The pipeline processor is a common paradigm for very high speed computing machinery. Pipeline processors provide high speed because their separate stages can operate concurrently, much as different people on a manufacturing assembly line work concurrently on material passing down the line. Although the concurrency of pipeline processors makes their design a demanding task, they can be found in graphics processors, in signal processing devices, in integrated circuit components for doing arithmetic, and in the instruction interpretation units and arithmetic operations of general purpose computing machinery.

Because I plan to describe a variety of pipeline processors, I will start by suggesting names for their various forms. Pipeline processors, or more simply just pipelines, operate on data as it passes along them. The latency of a pipeline is a measure of how long it takes a single data value to pass through it. The throughput rate of a pipeline is a measure of how many data values can pass through it per unit time. Pipelines both store and process data; the storage elements and processing logic in them alternate along their length. I will describe pipelines in their complete form later, but first I will focus on their storage elements alone, stripping away all processing logic. Stripped of all processing logic, any pipeline acts like a series of storage elements through which data can pass. Pipelines can be clocked or event-driven, depending on whether their parts act in response to some widely-distributed external clock, or act independently whenever local events permit. Some pipelines are inelastic; the amount of data in them is fixed. The input rate and the output rate of an inelastic pipeline must match exactly. Stripped of any processing logic, an inelastic pipeline acts like a shift register. Other pipelines are elastic; the amount of data in them may vary. The input rate and the output rate of an elastic pipeline may differ momentarily because of internal buffering. Stripped of all processing logic, an elastic pipeline becomes a flow-through first-in-first-out memory, or FIFO. FIFOs may be clocked or event-driven; their important property is that they are elastic.

I assign the name micropipeline to a particularly simple form of event-driven elastic pipeline with or without internal processing. The micro part of this name seems appropriate to me because micropipelines contain very simple circuitry, because micropipelines are useful in very short lengths, and because micropipelines are suitable for layout in microelectronic form.

I have chosen micropipelines as the subject of this lecture for three reasons. First, micropipelines are simple and easy to understand. I believe that simple ideas are best, and I find beauty in the simplicity and symmetry of micropipelines. Second, I see confusion surrounding the design of FIFOs. I offer this description of micropipelines in the hope of reducing some of that confusion. The third reason I have chosen my subject addresses the limitations imposed on us by the clocked-logic conceptual framework now commonly used in the design of digital systems. I believe that this conceptual framework or mind set masks simple and useful structures like micropipelines from our thoughts, structures that are easy to design and apply given a different conceptual framework. Because micropipelines are event-driven, their simplicity is not available within the clocked-logic conceptual framework.
I offer this description of micropipelines in the hope of focusing attention on an alternative transition-signalling conceptual framework. We need a new conceptual framework because the complexity of VLSI technology has now reached the point where design time and design cost often exceed fabrication time and fabrication cost. Moreover, most systems designed today are monolithic and resist mid-life improvement. The transition-signalling conceptual framework offers the opportunity to build up complex systems by hierarchical composition from simpler pieces. The resulting systems are easily modified. I believe that the transition-signalling conceptual framework has much to offer in reducing the design time and cost of complex systems and increasing their useful lifetime. I offer this description of micropipelines as an example of the transition-signalling conceptual framework.

Until recently only a hardy few used the transition-signalling conceptual framework for design because it was too hard. It was nearly impossible to design the small circuits of 10 to 100 transistors that form the elemental building blocks from which complex systems are composed. Moreover, it was difficult to prove anything about the resulting compositions. In the past five years, however, much progress has been made on both fronts. Charles Molnar and his colleagues at Washington University have developed a simple way to design the small basic building blocks. Martin Rem's "VLSI Club" at the Technical University of Eindhoven has been working effectively on the mathematics of event-driven systems [6, 10, 11, 19]. These emerging conceptual tools now make transition signalling a lively candidate for widespread use.
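A minimal software sketch of the elastic behaviour described above, with invented names (ElasticStage, build_fifo, settle) and no claim to model Sutherland's circuits: each storage element holds at most one value and hands it forward whenever the stage ahead of it happens to be empty, so data advances on local events rather than on a global clock.

    class ElasticStage:
        """One storage element of an elastic pipeline: it holds at most one
        value and hands it to the next stage whenever that stage is empty."""
        def __init__(self):
            self.value = None        # None means the stage is empty
            self.next = None         # downstream stage, if any

        def advance(self):
            # Fire only on a local condition: data here, room ahead.
            if self.value is not None and self.next is not None \
                    and self.next.value is None:
                self.next.value, self.value = self.value, None
                return True
            return False

    def build_fifo(n):
        stages = [ElasticStage() for _ in range(n)]
        for a, b in zip(stages, stages[1:]):
            a.next = b
        return stages

    def settle(stages):
        # Let data drift forward until no stage can move; this stands in for
        # the asynchronous handshakes of a real event-driven pipeline.
        moved = True
        while moved:
            moved = False
            for s in reversed(stages):
                moved = s.advance() or moved

    fifo = build_fifo(4)
    for v in "abc":                    # the input side runs ahead...
        fifo[0].value = v
        settle(fifo)
    out = []
    while fifo[-1].value is not None:  # ...and the output side drains later
        out.append(fifo[-1].value)
        fifo[-1].value = None
        settle(fifo)
    print(out)                         # ['a', 'b', 'c'] -- first in, first out

Because each stage acts only when its local conditions permit, the amount of data held in the chain can vary from moment to moment; this is exactly the elasticity that distinguishes a FIFO from a shift register.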
Developing computer-based information systems necessarily involves making a number of implicit and explicit assumptions. The authors examine four different approaches to information systems development.
Two parallel thinning algorithms are presented and evaluated in this article. The two algorithms use two-subiteration approaches: (1) alternately deleting north and east and then south and west boundary pixels, and (2) alternately applying a thinning operator to one of two subfields. Image connectivities are proven to be preserved, and the algorithms' speed and medial curve thinness are compared with those of other two-subiteration approaches and a fully parallel approach. Both approaches produce very thin medial curves, and the second achieves the fastest overall parallel thinning.
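The article's deletion operators are not reproduced here, but the two-subiteration pattern itself is easy to show in code. The Python sketch below uses the well-known Zhang-Suen deletion conditions purely as a stand-in for approach (1); it assumes a binary image stored as a list of 0/1 rows with a background border.

    def thin_two_subiteration(img):
        """Two-subiteration parallel thinning (Zhang-Suen rules), shown only to
        illustrate the general scheme: mark every deletable boundary pixel from
        a single snapshot of the image, delete them all at once, then repeat
        with the complementary subiteration until nothing changes."""
        img = [row[:] for row in img]          # work on a copy; 1 = object
        h, w = len(img), len(img[0])

        def neighbours(y, x):
            # P2..P9, clockwise starting from the north neighbour
            return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                    img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

        def transitions(n):
            # number of 0 -> 1 patterns in the circular sequence P2,...,P9,P2
            return sum(1 for a, b in zip(n, n[1:] + n[:1]) if (a, b) == (0, 1))

        changed = True
        while changed:
            changed = False
            for step in (0, 1):                # the two subiterations
                to_delete = []
                for y in range(1, h - 1):
                    for x in range(1, w - 1):
                        if img[y][x] != 1:
                            continue
                        n = neighbours(y, x)
                        P2, _, P4, _, P6, _, P8, _ = n   # N, E, S, W neighbours
                        if not (2 <= sum(n) <= 6 and transitions(n) == 1):
                            continue
                        if step == 0:
                            ok = P2 * P4 * P6 == 0 and P4 * P6 * P8 == 0
                        else:
                            ok = P2 * P4 * P8 == 0 and P2 * P6 * P8 == 0
                        if ok:
                            to_delete.append((y, x))
                for y, x in to_delete:         # remove marked pixels "in parallel"
                    img[y][x] = 0
                changed = changed or bool(to_delete)
        return img

The structural point is that deletions within a subiteration are decided from one snapshot of the image before any pixel is removed, which is what allows the pass to be executed in parallel.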
Designers striving for user interface consistency can resemble Supreme Court justices trying to define pornography: each of us feels we know it when we see it, but people often disagree and a precise definition remains elusive. A close examination suggests that consistency is an unreliable guide and that designers would often do better to focus on users' work environments.
The final report of the Task Force on the Core of Computer Science presents a new intellectual framework for the discipline of computing and a new basis for computing curricula. This report has been endorsed and approved for release by the ACM Education Board.
Software systems development has been plagued by cost overruns, late deliveries, poor reliability, and user dissatisfaction. This article presents a paradigm for the study of software project management that is grounded in the feedback systems principles of system dynamics.
Over the past few years, I have developed an approach to the formal specification of concurrent systems that I now call the transition axiom method. The basic formalism has already been described elsewhere, but the formal details tend to obscure the important concepts. Here, I attempt to explain these concepts without discussing the details of the underlying formalism. Concurrent systems are not easy to specify. Even a simple system can be subtle, and it is often hard to find the appropriate abstractions that make it understandable. Specifying a complex system is a formidable engineering task. We can understand complex structures only if they are composed of simple parts, so a method for specifying complex systems must have a simple conceptual basis. I will try to demonstrate that the transition axiom method provides such a basis. However, I will not address the engineering problems associated with specifying real systems. Instead, the concepts will be illustrated with a series of toy examples that are not meant to be taken seriously as real specifications.

Are you proposing a specification language? No. The transition axiom method provides a conceptual and logical foundation for writing formal specifications; it is not a specification language. The method determines what a specification must say; a language determines in detail how it is said.

What do you mean by a formal specification? I find it helpful to view a specification as a contract between the user of a system and its implementer. The contract should tell the user everything he must know to use the system, and it should tell the implementer everything he must know about the system to implement it. In principle, once this contract has been agreed upon, the user and the implementer have no need for further communication. (This view describes the function of the specification; it is not meant as a paradigm for how systems should be built.) For a specification to be formal, the question of whether an implementation satisfies the specification must be reducible to the question of whether an assertion is provable in some mathematical system. To demonstrate that he has met the terms of the contract, the implementer should resort to logic rather than contract law. This does not mean that an implementation must be accompanied by a mathematical proof. It does mean that it should be possible, in principle though not necessarily in practice, to provide such a proof for a correct implementation. The existence of a formal basis for the specification method is the only way I know to guarantee that specifications are unambiguous. Ultimately, the systems we specify are physical objects, and mathematics cannot prove physical properties. We can prove properties only of a mathematical model of the system; whether or not the system correctly implements the model must remain a question of law and not of mathematics.

Just what is a system? By "system," I mean anything that interacts with its environment in a discrete (digital) fashion across a well-defined boundary. An airline reservation system is such a system, where the boundary might be drawn between the agents using the system, who are part of the environment, and the terminals, which are part of the system. A Pascal procedure is a system whose environment is the rest of the program, with which it interacts by responding to procedure calls and accessing global variables. Thus, the system being specified may be just one component of a larger system.
A real system has many properties, ranging from its response time to the color of the cabinet. No formal method can specify all of these properties. Which ones can be specified with the transition axiom method? The transition axiom method specifies the behavior of a system; that is, the sequence of observable actions it performs when interacting with the environment. More precisely, it specifies two classes of behavioral properties: safety and liveness properties. Safety properties assert what the system is allowed to do, or equivalently, what it may not do. Partial correctness is an example of a safety property, asserting that a program may not generate an incorrect answer. Liveness properties assert what the system must do. Termination is an example of a liveness property, asserting that a program must eventually generate an answer. (Alpern and Schneider have formally defined these two classes of properties.) In the transition axiom method, safety and liveness properties are specified separately. There are important behavioral properties that cannot be specified by the transition axiom method; these include average response time and probability of failure. A transition axiom specification can provide a formal model with which to analyze such properties, but it cannot formally specify them. There are also important nonbehavioral properties of systems that one might want to specify, such as storage requirements and the color of the cabinet. These lie completely outside the realm of the method.

Why specify safety and liveness properties separately? There is a single formalism that underlies a transition axiom specification, so there is no formal separation between the specification of safety and liveness properties. However, experience indicates that different methods are used to reason about the two kinds of properties, and it is convenient in practice to separate them. I consider the ability to decompose a specification into liveness and safety properties to be one of the advantages of the method. (One must prove safety properties in order to verify liveness properties, but this is a process of decomposing the proof into smaller lemmas.)

Can the method specify real-time behavior? Worst-case behavior can be specified, since the requirement that the system must respond within a certain length of time can be expressed as a safety property, namely, that the clock is not allowed to reach a certain value without the system having responded. Average response time cannot be expressed as a safety or liveness property.

The transition axiom method can assert that some action either must occur (liveness) or must not occur (safety). Can it also assert that it is possible for the action to occur? No. A specification serves as a contractual constraint on the behavior of the system. An assertion that the system may or may not do something provides no constraint and therefore serves no function as part of the formal specification. Specification methods that include such assertions generally use them as poor substitutes for liveness properties. Some methods cannot specify that a certain input must result in a certain response, specifying instead that it is possible for the input to be followed by the response. Every specification I have encountered that used such assertions was improved by replacing the possibility assertions with liveness properties that more accurately expressed the system's informal requirements.
Imprecise wording can make it appear that a specification contains a possibility assertion when it really doesn't. For example, one sometimes states that it must be possible for a transmission line to lose messages. However, the specification does not require that the loss of messages be possible, since this would prohibit an implementation that guaranteed no messages were lost. The specification might require that something happens (a liveness property) or doesn't happen (a safety property) despite the loss of messages. Or, the statement that messages may be lost might simply be a comment about the specification, observing that it does not require that all messages be delivered, and not part of the actual specification.

If a safety property asserts that some action cannot happen, doesn't its negation assert that the action is possible? In a formal system, one must distinguish the logical formula A from the assertion ⊢A, which means that A is provable in the logic; ⊢A is not a formula of the logic itself. In the logic underlying the transition axiom method, if A represents a safety property asserting that some action is impossible, then the negation of A, which is the formula ¬A, asserts that the action must occur. The action's possibility is expressed by the negation of ⊢A, which is a metaformula and not a formula within the logic. See the original papers for more details.
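A toy illustration of the safety/liveness distinction, given in Python rather than in the transition axiom formalism, with invented names (safety_ok, liveness_pending): the system is a one-place buffer whose behaviors are sequences of put and get actions. Its safety property can be checked on any finite prefix of a behavior, whereas its liveness property, that every put is eventually followed by a get, can only be violated by an infinite behavior.

    def safety_ok(trace):
        """Safety for a one-place buffer: never get from an empty buffer, never
        put into a full one, and each get returns the value most recently put.
        A violation, if any, shows up in some finite prefix of the behavior."""
        holding = None
        for action, value in trace:
            if action == "put":
                if holding is not None:
                    return False           # overwrote an undelivered value
                holding = value
            elif action == "get":
                if holding is None or value != holding:
                    return False           # got nothing, or the wrong value
                holding = None
        return True

    def liveness_pending(trace):
        """True if some put is still awaiting its get.  On a finite prefix this
        is not yet a violation of the liveness property; only an infinite
        behavior that never performs the get would violate it."""
        puts = sum(1 for a, _ in trace if a == "put")
        gets = sum(1 for a, _ in trace if a == "get")
        return puts > gets

    good = [("put", 1), ("get", 1), ("put", 2), ("get", 2)]
    bad = [("put", 1), ("put", 2)]         # safety violation: value 1 overwritten
    print(safety_ok(good), safety_ok(bad)) # True False
    print(liveness_pending(good))          # False: nothing left outstanding

Nothing in this sketch captures the transition axiom method itself; it only shows why the two classes of properties call for different kinds of reasoning.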
The impact of a computerized record system on the work lives of customer service representatives in a large utility company is examined. The results suggest alternate methods for thinking about and measuring technological impact.
When a thing is new, people say: "It is not true." Later, when its truth becomes obvious, they say: "It is not important." Finally, when its importance cannot be denied, they say: "Anyway, it is not new." Adapted from William James
Concern about the present and future evolution of MIS as a scientific field appears to be growing among researchers. The authors contend that most models used by investigators of the MIS field have been based on an inappropriate monistic view of science.
Identifying variables that predict computer aptitude can help educators and employers target potential students and employees. The authors examine a number of possible explanatory variables including demographic profiles, high school achievements, prior computer training and experience, cognitive styles, and problem-solving abilities.
Several methods are presented for adaptive, invertible data compression in the style of Lempel's and Ziv's first textual substitution proposal. For the first two methods, the article describes modifications of McCreight's suffix tree data structure that support cyclic maintenance of a window on the most recent source characters. A percolating update is used to keep node positions within the window, and the updating process is shown to have constant amortized cost. Other methods explore the tradeoffs between compression time, expansion time, data structure size, and amount of compression achieved. The article includes a graph-theoretic analysis of the compression penalty incurred by our codeword selection policy in comparison with an optimal policy, and it includes empirical studies of the performance of various adaptive compressors from the literature.
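The article's contribution lies in the suffix-tree machinery and the percolating update, which are not reproduced here. As a point of reference, the following Python sketch shows the underlying textual-substitution idea in its crudest form: a window over the most recent source characters is searched by brute force for the longest match, and the output is a stream of literals and (distance, length) copies. The window size, the length-3 threshold, and the token format are arbitrary choices for the example; the quadratic match search is exactly the cost that a suffix tree avoids.

    def lz77_compress(data, window=4096, max_len=18):
        """Brute-force textual substitution in the LZ77 style: emit either a
        literal character or a (distance, length) reference into a sliding
        window of the most recent source characters."""
        out, i = [], 0
        while i < len(data):
            best_len, best_dist = 0, 0
            for j in range(max(0, i - window), i):
                length = 0
                while (length < max_len and i + length < len(data)
                       and data[j + length] == data[i + length]):
                    length += 1
                if length > best_len:
                    best_len, best_dist = length, i - j
            if best_len >= 3:                   # a reference pays for itself
                out.append(("copy", best_dist, best_len))
                i += best_len
            else:
                out.append(("lit", data[i]))
                i += 1
        return out

    def lz77_decompress(tokens):
        buf = []
        for tok in tokens:
            if tok[0] == "lit":
                buf.append(tok[1])
            else:
                _, dist, length = tok
                for _ in range(length):         # byte-by-byte copy handles
                    buf.append(buf[-dist])      # overlapping references
        return "".join(buf)

    s = "abracadabra abracadabra abracadabra"
    tokens = lz77_compress(s)
    assert lz77_decompress(tokens) == s         # invertible, as required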
ONCOCIN is a medical expert system that extends the skeletal-planning technique to an application area where the history of past events and the duration of actions are important. The system's knowledge base is designed to reflect a hierarchical model of the domain, and its control and inference mechanisms encourage a mixed-initiative style of interaction between the computer and the user.
The Bloom filter technique of hashing finds several applications, such as the efficient maintenance of differential files, space-efficient storage of dictionaries, and parallel free-text searching. The performance of hash transformations with reference to the filter error rate is the focus of this article.
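For readers meeting the structure for the first time, here is a minimal Bloom filter sketch in Python; it illustrates the data structure only, not the hash transformations whose performance the article analyzes. The salted-SHA-256 scheme for deriving the k indices and the class name BloomFilter are choices made for the example.

    import hashlib
    import math

    class BloomFilter:
        """Minimal Bloom filter: k hash transformations set and test k bit
        positions in an m-bit array.  Lookups can yield false positives but
        never false negatives; after inserting n keys the false-positive rate
        is roughly (1 - exp(-k * n / m)) ** k."""
        def __init__(self, m_bits, k_hashes):
            self.m = m_bits
            self.k = k_hashes
            self.bits = bytearray((m_bits + 7) // 8)

        def _positions(self, key):
            # Derive k indices by salting one hash function with the index i,
            # a convenient stand-in for a family of hash transformations.
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, key):
            for pos in self._positions(key):
                self.bits[pos // 8] |= 1 << (pos % 8)

        def __contains__(self, key):
            return all(self.bits[pos // 8] & (1 << (pos % 8))
                       for pos in self._positions(key))

    bf = BloomFilter(m_bits=8192, k_hashes=4)
    for word in ["differential", "dictionary", "text"]:
        bf.add(word)
    print("dictionary" in bf)                   # True
    print("missing" in bf)                      # almost certainly False
    print((1 - math.exp(-4 * 3 / 8192)) ** 4)   # predicted false-positive rate

The formula in the docstring assumes the hash transformations behave like uniform random functions; how closely practical transformations approach that ideal error rate is the kind of question the article examines.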
An experiment is designed to investigate the relationship between system structure and maintainability. An old, ill-structured system is improved in two sequential stages, yielding three system versions for the study. The primary objectives of the research are to determine how or whether the differences in the systems influence maintenance performance; whether the differences are discernible to programmers; and whether the differences are measurable. Experienced programmers perform a portfolio of maintenance tasks on the systems. Results indicate that system improvements lead to decreased total maintenance time and decreased frequency of ripple effect errors. This suggests that improving old systems may be worthwhile and may yield benefits over the remaining life of the system. System differences are not discernible to programmers; apparently programmers are unable to separate the complexity of the systems from the complexity of the maintenance tasks. This finding suggests a need for further research on the efficacy of subjectively based software metrics. Finally, results indicate that a selected set of automatable, objective complexity metrics reflected both the improvements in the system and programmer maintenance performance. These metrics appear to offer potential as project management tools.
The user cube, a precise and comprehensive graphical taxonomy designed to provide a common base for understanding and classifying end users in organizations, can be used to assess the risks associated with end-user computing.