Prior to the Internet, questions of reliability and trust were answered, in part, through personal and corporate reputations. Internet services operate on a vastly larger scale than Main Street and permit virtually anonymous interactions. Nevertheless, reputations still play a major role. Systems are emerging that respect anonymity and operate on the Internet's scale. A reputation system collects, distributes, and aggregates feedback about participants' past behavior. Though few producers or consumers of the ratings know one another, these systems help people decide whom to trust, encourage trustworthy behavior, and deter participation by those who are unskilled or dishonest.
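As a minimal illustration of the collect/aggregate cycle described above, consider the following sketch. It is not any particular deployed reputation system; the recency-weighted average, the decay factor, and the score range are assumptions chosen purely for simplicity.

```python
# A minimal sketch of reputation aggregation, not any deployed system:
# feedback arrives as (rater, target, score) triples, and a target's
# reputation is a recency-weighted average of the scores it receives.
from collections import defaultdict

DECAY = 0.9  # hypothetical decay factor: older feedback counts for less


class ReputationSystem:
    def __init__(self):
        self._history = defaultdict(list)  # target -> scores, oldest first

    def submit_feedback(self, rater, target, score):
        """Collect one rating (score in [0, 1]) about a target."""
        self._history[target].append(score)

    def reputation(self, target):
        """Aggregate feedback: recency-weighted mean, newest weighted most."""
        scores = self._history[target]
        if not scores:
            return None  # no history: no basis for trust either way
        weights = [DECAY ** age for age in range(len(scores) - 1, -1, -1)]
        return sum(w * s for w, s in zip(weights, scores)) / sum(weights)
```

The recency weighting reflects one design choice such systems must make: recent behavior should count for more than old behavior, or a participant could build a good record and then defect.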
Wireless integrated network sensors (WINS) combine microsensor technology, low-power signal processing and computation, and low-cost wireless networking in a compact system. In the context of a security application designed to detect and identify threats within some geographic region and report decisions about the presence and nature of such threats to a remote observer via the Internet, the article describes the physical principles that lead to consideration of dense sensor networks, outlines how energy and bandwidth constraints compel a distributed and layered signal-processing architecture, explains why network self-organization and reconfiguration are essential, discusses how to embed WINS nodes in the Internet, and describes a prototype platform that enables these functions, including remote Internet control and analysis of sensor-network operation.
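To make the energy argument concrete, here is a minimal, invented sketch (not the WINS design itself) of the layered processing the abstract describes: a node runs cheap local detection and pays for the radio only to send compact event reports rather than raw waveforms. The threshold, window size, and message format are all illustrative assumptions.

```python
# Sketch of a sensor node's local processing layer: detect events cheaply
# on the node, and use the (energy-costly) radio only for event reports.
import statistics

THRESHOLD_SIGMAS = 3.0  # hypothetical detection threshold
WINDOW = 50             # samples kept for background-noise estimation


class WinsNode:
    def __init__(self, node_id, radio_send):
        self.node_id = node_id
        self.radio_send = radio_send  # callback into the costly radio layer
        self.background = []          # recent samples for noise estimation

    def sample(self, reading, timestamp):
        """Low-power local layer: decide whether a reading is an event."""
        if len(self.background) >= WINDOW:
            mean = statistics.mean(self.background)
            sigma = statistics.pstdev(self.background) or 1e-9
            if abs(reading - mean) > THRESHOLD_SIGMAS * sigma:
                # Only now do we pay for transmission: a compact report,
                # not the raw signal, goes over the network.
                self.radio_send({"node": self.node_id, "t": timestamp,
                                 "magnitude": reading})
                return
        # Non-events update the sliding background window.
        self.background = (self.background + [reading])[-WINDOW:]
```

The point of the sketch is the architectural one made in the abstract: because transmission dominates the energy budget, signal processing is pushed down to the node, and only decisions move up the layers.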
Principal elements of Web personalization include the modeling of Web objects, categorization of objects and subjects, matching between and across objects and/or subjects, and determination of the set of actions to be recommended for personalization. The prerequisite step to any type of usage mining is the identification of a set of server sessions from the raw usage data. The session file obtained in the data-preparation stage can then be used as input to a variety of data mining algorithms, such as the discovery of association rules or sequential patterns, clustering, and classification.
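A minimal sketch of that prerequisite data-preparation step might look as follows: raw log entries are grouped into server sessions using an inactivity timeout. The field names and the 30-minute threshold are illustrative assumptions, not a fixed standard.

```python
# Sketch of sessionization: group raw log entries into server sessions.
SESSION_TIMEOUT = 30 * 60  # seconds of inactivity that ends a session


def sessionize(log_entries):
    """log_entries: iterable of dicts with 'visitor', 'timestamp', 'url',
    sorted by timestamp. Returns a list of sessions (lists of URLs)."""
    sessions = []
    open_sessions = {}  # visitor -> (last_seen, current session)
    for entry in log_entries:
        visitor, ts, url = entry["visitor"], entry["timestamp"], entry["url"]
        last_seen, session = open_sessions.get(visitor, (None, None))
        if session is None or ts - last_seen > SESSION_TIMEOUT:
            session = []            # inactivity gap: start a new session
            sessions.append(session)
        session.append(url)
        open_sessions[visitor] = (ts, session)
    return sessions
```

The resulting session file is exactly the input the abstract mentions: each session becomes one transaction for association-rule discovery, one sequence for sequential-pattern mining, or one feature vector for clustering and classification.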
In this increasingly time-constrained world, Internet stores allow consumers to shop from the convenience of remote locations. Yet most of these Internet stores are losing money. The explanation may lie in the risks associated with Internet shopping. These risks may arise because consumers are concerned about the security of transmitting credit card information over the Internet. Consumers may also be apprehensive about buying something without touching or feeling it and being unable to return it if it fails to meet their approval. This article attempts to determine why certain consumers are drawn to the Internet and why others are not.
Some say the public is too trusting online; without thinking, people routinely download software likely to destroy important information or blithely engage in e-auctions or chat rooms with strangers. Others say that the public does not trust enough, that people refrain from e-commerce under the mistaken belief that their financial transactions are not secure. Perhaps the greatest difference between trust online and in all other contexts is that when online, it is more difficult to assess the potential harm and good will of others, as well as what counts as reasonable machine performance. This is why people can engage in virtually identical online interactions, yet reach widely disparate judgments about whether the interactions are trustworthy.
There remains much work to be done by intrusion detection (ID) systems engineers in the design, integration, and deployment of efficient and robust ID systems capable of reliably identifying and tracking hostile objects in cyberspace. Multisensor data fusion provides an important functional framework for building next-generation ID systems and cyberspace situational awareness. A brief review of ID concepts and terms and an overview of the art and science of multisensor data-fusion technology are presented, and the ID systems data-mining environment is introduced as a complementary process to the ID system data-fusion model. Multisensor data fusion, or distributed sensing, is a relatively new engineering discipline used to combine data from multiple and diverse sensors and sources in order to make inferences about events, activities, and situations. Situation knowledge of cyberspace is used to analyze objects and aggregated groups against existing ID templates to provide an assessment of the current situation and to suggest or identify future threatening attacks or cyberspace activity.
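As an invented, minimal illustration of sensor-level fusion (not the data-fusion model the article presents), alerts from diverse sensors about the same object can be combined into a single confidence estimate. The noisy-OR combination below assumes independent sensors, which is a strong and often unrealistic assumption.

```python
# Sketch of combining alerts from multiple ID sensors about one object.
from collections import defaultdict


def fuse_alerts(alerts):
    """alerts: iterable of (object_id, sensor_id, confidence in [0, 1]).
    Returns object_id -> fused confidence that the object is hostile."""
    by_object = defaultdict(dict)
    for obj, sensor, conf in alerts:
        # Keep the strongest report each sensor made about each object.
        by_object[obj][sensor] = max(conf, by_object[obj].get(sensor, 0.0))
    fused = {}
    for obj, per_sensor in by_object.items():
        # Noisy-OR: the object is benign only if every sensor's report
        # is a false alarm (assumes independent sensors).
        p_benign = 1.0
        for conf in per_sensor.values():
            p_benign *= 1.0 - conf
        fused[obj] = 1.0 - p_benign
    return fused


# Two weak reports from different sensors reinforce each other:
print(fuse_alerts([("host42", "netflow", 0.5), ("host42", "syslog", 0.6)]))
# {'host42': 0.8}
```

Even this toy version shows the value fusion adds over any single sensor: corroboration across diverse sources raises confidence beyond what either report justifies alone.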
The computer science research community now enjoys a rare and exciting opportunity to redefine its agenda and establish new goals that will propel society beyond interactive computing and the human/machine breakpoint. The challenge is to develop mechanisms that allow the software properties exploited in the development of virtual worlds to be pushed as close as possible to the physical environment. In addition to passing the one-computer-per-person breakpoint, many proactive environments will pass a significant real-time breakpoint, operating at faster than human speeds. Getting humans out of the loop and into supervisory and policy-making roles is a necessity for systems with faster-than-human response times. Declaring at least a partial victory on the interactive front represents a tremendous opportunity to reexamine assumptions and revisit questions that cut across many aspects of computer science.
New social traditions are needed to enhance cooperative behaviors in electronic environments supporting e-commerce, e-services, and online communities. Since users of online systems cannot savor a cup of tea with an electronic rug merchant, designers must develop strategies for facilitating e-commerce and auctions. Since users cannot make eye contact and judge intonations with an online lawyer or physician, designers must create new social norms for professional services. Since users cannot stroll through online communities encountering neighbors with their children, designers must facilitate the trust that enables collective action. In parallel, consumer groups must be vigorous in monitoring and reporting deceptions and disreputable business practices.
The business architecture layer defines the organizational structure and workflows for business rules and processes. It is a conceptual level expressed in terms meaningful to actual users of application systems. The application architecture layer defines the actual implementation of the business concepts in terms of enterprise applications; the challenge at this layer is to meet the business requirements. In practice, the business architectures of the individual organizational units cannot be treated in isolation: the business processes of cooperating units are highly interrelated and should be handled as such. Certain kinds of interactions among computer systems resemble interactions among people; thus it is important to consider all levels when integrating those systems. A horizontal integration of the layers is required to support the business processes effectively.
The growing market for networked storage is a result of the exploding demand for storage capacity in an increasingly Internet-dependent world and its tight labor market. Storage area networks (SAN) and network attached storage (NAS) are two proven approaches to networking storage. Technically, including a file system in a storage subsystem differentiates NAS, which has one, from SAN, which does not. In practice, however, it is often NAS's close association with Ethernet network hardware and SAN with Fibre Channel network hardware that has a greater effect on a user's purchasing decisions. This article is about how emerging technology may blur the network-centric distinction between NAS and SAN.
An amorphous computing medium is a system of irregularly placed, asynchronous, locally interacting computing elements. One critical task of amorphous computing is to identify appropriate organizing principles and programming methodologies for controlling amorphous systems. Biology is discussed as an actual implementation technology for amorphous systems by means of cellular computing, which constructs logic circuits within living cells. Physics, as well as biology, can be a source of new metaphors for amorphous computing. A major impetus for the study of amorphous computing is that the ability to program amorphous systems would greatly expand the set of physical substrates available to support information processing. One striking possibility is that we could create a programming technology based on living cells. The vision of cellular computing is to harness chemical mechanisms to organize and control biological processes, just as electrical mechanisms are used to control electrical processes.
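A small simulation makes the definition concrete. The sketch below, with invented parameters, models an amorphous medium as randomly placed elements that interact only with neighbors within a fixed radius; a hop-count gradient spreading from a source element is one of the simplest organizing principles for such media.

```python
# Sketch of an amorphous medium: randomly placed elements, purely local
# interaction, asynchronous updates. A hop-count gradient spreads from
# element 0; no element uses any global information.
import random

N, RADIUS = 200, 0.12
points = [(random.random(), random.random()) for _ in range(N)]
hops = [None] * N  # per-element state: hop distance from the source
hops[0] = 0        # element 0 is the gradient source


def neighbors(i):
    xi, yi = points[i]
    return [j for j, (x, y) in enumerate(points)
            if j != i and (x - xi) ** 2 + (y - yi) ** 2 <= RADIUS ** 2]


# Asynchronous updates: elements fire in random order, reading only
# their neighbors' state.
for _ in range(20 * N):
    i = random.randrange(N)
    known = [hops[j] for j in neighbors(i) if hops[j] is not None]
    if known:
        best = min(known) + 1
        if hops[i] is None or best < hops[i]:
            hops[i] = best  # relax toward the shortest hop count

print("elements reached by the gradient:", sum(h is not None for h in hops))
```

Despite the irregular placement and random update order, a coherent global structure (the gradient) emerges, which is exactly the kind of organizing principle the paragraph above says amorphous computing must identify.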
Biometrics refers to the automatic identification of a person based on his or her physiological or behavioral characteristics. It provides a better solution to the increased security requirements of the information society than traditional identification methods such as passwords and PINs. As biometric sensors become less expensive and more miniaturized, and as the public realizes that biometrics is actually an effective strategy for protecting privacy and preventing fraud, this technology is likely to be used in almost every transaction requiring the authentication of personal identity.
Outsourcing evaluations often result from the frustrations caused by different stakeholder expectations and perceptions of IT performance. This belief is based on an analysis of what IT managers can realistically achieve versus what senior executives and users expect them to achieve. Different stakeholder perspectives set unrealistic performance expectations for IT managers, leading to frustration, loss of faith in internal IT management, and hopes that outsourcing vendors will provide the solutions. While outsourcing can lead to a reduction in IT costs, the reduction often comes at a price: reduced service. Moreover, because most of the cost savings come from implementing key cost-reduction strategies, such as data center consolidation, control of unit-cost items, and standardized software, rather than from economies of scale, internal IT departments should be able to reduce costs on their own. And indeed they did.
A multimodal architecture can function more robustly than any individual recognition technology, since every such technology, including spoken-language systems, is inherently error prone. One goal of a well-designed multimodal system is the integration of complementary input modes to create a synergistic blend, permitting the strengths of each mode to overcome weaknesses in the others and to support mutual compensation of recognition errors. The focus here is on recognition errors as a problem for spoken-language systems, especially when processing diverse speaker styles or speech produced in noisy field settings. However, recent research has shown that when speech is combined with another input mode within a multimodal architecture, two modes can function better than one alone. An outline is given of when and why multimodal systems display such error-handling advantages.
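A minimal sketch of this mutual compensation, with an invented vocabulary and invented scores, might combine the ranked n-best lists of two recognizers and pick the highest-scoring compatible pair, letting one mode rescue the other's near-miss.

```python
# Sketch of mutual disambiguation between two error-prone recognizers:
# the joint interpretation is the compatible (speech, gesture) pair with
# the highest combined score. All entries below are illustrative.
speech_nbest = [("delete", 0.48), ("repeat", 0.44), ("complete", 0.08)]
gesture_nbest = [("point_at_object", 0.70), ("circle_region", 0.30)]

# Which spoken commands make sense with which gestures.
COMPATIBLE = {("delete", "point_at_object"), ("delete", "circle_region"),
              ("repeat", "circle_region")}


def fuse(speech, gesture):
    best = None
    for word, p_s in speech:
        for gest, p_g in gesture:
            if (word, gest) in COMPATIBLE:
                score = p_s * p_g  # assumes independent recognizers
                if best is None or score > best[2]:
                    best = (word, gest, score)
    return best


# The gesture pulls the correct speech hypothesis to the top even though
# "delete" and "repeat" were nearly tied acoustically.
print(fuse(speech_nbest, gesture_nbest))  # ('delete', 'point_at_object', ...)
```

This is the error-handling advantage in miniature: a hypothesis that would have been a coin flip for the speech recognizer alone becomes a confident joint interpretation once the second mode constrains it.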
Wireless and mobile networks have provided the flexibility required for an increasingly mobile workforce. The choice of media access control can affect both the performance and the use of wireless networks. Mobile and wireless networks represent the next wave of networking because of their usefulness in an information-oriented society. However, they also present many challenges to application, hardware, software, and network designers and implementers. During the past five years, research has focused on systematically alleviating the limitations of wireless and mobile environments. Several types of wireless networks are discussed, including wireless LANs, wireless local loops, and satellite networks, along with middleware and applications.
Only recently have human-computer interface designers taken the human-computer conversation metaphor seriously enough to attempt to design a computer that could hold up its end of a conversation with a human user. Some of the features of human-human conversation being implemented in this new genre of embodied conversational agent are described, and a notable embodied conversational agent - named Rea - built around these features is explored. A set of five properties of human conversation had to be modeled for the system to demonstrate humanlike conversational patterns: 1. function rather than behavior, 2. synchronization, 3. division between propositional and interactional contributions, 4. multithreadedness, and 5. entrainment. Rea's verbal and nonverbal behaviors are designed in terms of these properties: 1. humanlike body, 2. feedback and turn requests, 3. user discourse model, and 4. conversational functions.
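The first property, function rather than behavior, is the easiest to illustrate. The toy sketch below is not Rea's actual architecture; the function names and behavior mappings are invented to show the separation of concerns: the dialogue manager reasons about discourse-level functions, and a distinct realization layer translates each function into concrete verbal and nonverbal behaviors.

```python
# Toy sketch of "function rather than behavior": the system decides on a
# conversational function, and a separate mapping realizes it as surface
# behaviors. The table below is invented for illustration.
FUNCTION_TO_BEHAVIORS = {
    "give_turn": ["relax hands", "look at user", "raise eyebrows"],
    "take_turn": ["glance away", "start gesture", "begin speaking"],
    "give_feedback": ["nod", "say 'mm-hmm'"],
}


def realize(function):
    """Translate a discourse-level function into surface behaviors."""
    return FUNCTION_TO_BEHAVIORS.get(function, [])


print(realize("give_turn"))  # ['relax hands', 'look at user', ...]
```

Keeping the two levels apart means the same discourse decision can be realized differently depending on context, which is what lets an embodied agent coordinate speech, gaze, and gesture rather than scripting fixed animations.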
An important problem relating to personalization concerns understanding how a machine can help an individual user by suggesting recommendations. It seems that user control over system recommendations for query reformulation is important to users with respect to their main task, but control over which terms are actually suggested is not. Rather, having to engage in the subsidiary task of choosing terms distracts users from what they actually need to do.
Adaptive Web sites semiautomatically improve their organization and presentation by learning from visitor access patterns. An index page is a page consisting of links to a set of pages that cover a particular topic, and index pages are central to site organization. IndexFinder does most of the work of index page synthesis automatically, leaving to the Webmaster the questions of whether and where the page should be added to the site, and how it should be titled.
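A sketch in the spirit of index page synthesis (not IndexFinder's actual algorithm) is shown below: pages that frequently co-occur in visitor sessions are clustered, and each cluster becomes a candidate index page for the Webmaster to review. The threshold and the greedy merge are illustrative simplifications.

```python
# Sketch of candidate index-page synthesis from visitor sessions.
from collections import Counter
from itertools import combinations


def candidate_index_pages(sessions, min_cooccur=3):
    """sessions: list of lists of page URLs. Returns clusters of pages
    whose pairwise co-occurrence count meets the threshold; each cluster
    is the link set for one candidate index page."""
    pair_counts = Counter()
    for session in sessions:
        for a, b in combinations(sorted(set(session)), 2):
            pair_counts[(a, b)] += 1
    clusters = []
    for (a, b), count in pair_counts.items():
        if count < min_cooccur:
            continue
        for cluster in clusters:  # greedily merge into an existing cluster
            if a in cluster or b in cluster:
                cluster.update((a, b))
                break
        else:
            clusters.append({a, b})
    return clusters
```

The division of labor matches the abstract: the mining step proposes coherent link sets automatically, while the human decisions, whether to add the page, where to put it, and what to call it, stay with the Webmaster.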
Yahoo! was one of the first sites on the Web to use personalization on a large scale, most notably with its My Yahoo! application. My Yahoo! is a customized personal copy of Yahoo! Users can select from hundreds of modules, such as news, stock prices, weather, and sports scores, and place them on one or more Web pages. The actual content for each module is then updated automatically, so users can see what they want to see in the order they want to see it. Personal information about Yahoo! users is maintained in a specially designed User Database. Observations and insights about large-scale Web personalization include: 1. Most users take what is given to them and never customize. 2. A great deal of effort should go into the default page. 3. Power users will do amazing things - never underestimate them. 4. Customization should follow you as much as possible. 5. People generally do not understand the concept of customization. 6. Learn from users. 7. Scalability is essential.
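A minimal sketch of module-based customization in the style described above (not Yahoo!'s actual implementation; the module names, default layout, and profile structure are invented) shows why the default page deserves so much effort: it is the page most users will keep forever.

```python
# Sketch of module-based page customization with a per-user profile.
DEFAULT_LAYOUT = ["news", "weather", "stocks", "sports"]  # the default page

AVAILABLE_MODULES = {"news", "weather", "stocks", "sports",
                     "horoscope", "tv_listings", "lottery"}


class UserProfile:
    def __init__(self, user_id):
        self.user_id = user_id
        self.layout = list(DEFAULT_LAYOUT)  # most users never change this

    def add_module(self, module, position=None):
        if module not in AVAILABLE_MODULES:
            raise ValueError(f"unknown module: {module}")
        if module not in self.layout:
            self.layout.insert(position if position is not None
                               else len(self.layout), module)

    def remove_module(self, module):
        if module in self.layout:
            self.layout.remove(module)

    def render(self, fetch_content):
        """Content is fetched fresh on each view, in the user's order."""
        return [(m, fetch_content(m)) for m in self.layout]
```

The structure mirrors the observations in the list above: customization is stored centrally with the user (so it can follow them), content updates automatically, and the carefully chosen defaults do all the work for the majority who never customize.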