Theoretical ideas and experimental results concerning high-temperature superconductors are reviewed. Special emphasis is given to computational studies of models of strongly correlated electrons proposed to describe the two-dimensional CuO2 planes. The review also includes results obtained with several analytical techniques. The one- and three-band Hubbard models and the t-J model are discussed, and their behavior is compared against experiments when available. Among the conclusions of the review is that some experimentally observed unusual properties of the cuprates have a natural explanation through Hubbard-like models. In particular, anomalous features like the mid-infrared band of the optical conductivity sigma(omega), the new states observed in the gap in photoemission experiments, the behavior of the spin correlations with doping, and the presence of phase separation in the copper oxide superconductors may be explained, at least in part, by these models. Finally, the existence of superconductivity in Hubbard-like models is analyzed. Some aspects of the recently proposed ideas to describe the cuprates as having a d(x2-y2) superconducting condensate at low temperatures are discussed. Numerical results favor this scenario over others. It is concluded that computational techniques provide a useful, unbiased tool for studying the difficult regime where electrons are strongly interacting, and that considerable progress can be achieved by comparing numerical results against analytical predictions for the properties of these models. Future directions of the active field of computational studies of correlated electrons are briefly discussed.

A comprehensive review of spatiotemporal pattern formation in systems driven away from equilibrium is presented, with emphasis on comparisons between theory and quantitative experiments. Examples include patterns in hydrodynamic systems such as thermal convection in pure fluids and binary mixtures, Taylor-Couette flow, parametric-wave instabilities, as well as patterns in solidification fronts, nonlinear optics, oscillatory chemical reactions and excitable biological media. The theoretical starting point is usually a set of deterministic equations of motion, typically in the form of nonlinear partial differential equations. These are sometimes supplemented by stochastic terms representing thermal or instrumental noise, but for macroscopic systems and carefully designed experiments the stochastic forces are often negligible. An aim of theory is to describe solutions of the deterministic equations that are likely to be reached starting from typical initial conditions and to persist at long times. A unified description is developed, based on the linear instabilities of a homogeneous state, which leads naturally to a classification of patterns in terms of the characteristic wave vector q0 and frequency omega0 of the instability. Type I(s) systems (omega0 = 0, q0 not-equal 0) are stationary in time and periodic in space; type III(o) systems (omega0 not-equal 0, q0 = 0) are periodic in time and uniform in space; and type I(o) systems (omega0 not-equal 0, q0 not-equal 0) are periodic in both space and time. Near a continuous (or supercritical) instability, the dynamics may be accurately described via ''amplitude equations,'' whose form is universal for each type of instability. The specifics of each system enter only through the nonuniversal coefficients. Far from the instability threshold a different universal description known as the ''phase equation'' may be derived, but it is restricted to slow distortions of an ideal pattern.
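To make the universal form concrete: for a type I(s) instability in one extended direction, the amplitude equation is the standard real Ginzburg-Landau form, quoted here for illustration (tau_0, xi_0, and g_0 are the nonuniversal coefficients, epsilon the reduced distance from threshold, and A the slowly varying complex amplitude of the pattern):

```latex
\tau_0\,\partial_t A = \epsilon A + \xi_0^2\,\partial_x^2 A - g_0 |A|^2 A,
\qquad
u(x,t) \simeq A(x,t)\,e^{i q_0 x} + \mathrm{c.c.}
```

All systems undergoing a type I(s) instability share this equation; only the measured values of tau_0, xi_0, and g_0 distinguish, say, one convection experiment from another.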
For many systems appropriate starting equations are either not known or too complicated to analyze conveniently. It is thus useful to introduce phenomenological order-parameter models, which lead to the correct amplitude equations near threshold, and which may be solved analytically or numerically in the nonlinear regime away from the instability. The above theoretical methods are useful in analyzing ''real pattern effects'' such as the influence of external boundaries, or the formation and dynamics of defects in ideal structures. An important element in nonequilibrium systems is the appearance of deterministic chaos. A great deal is known about systems with a small number of degrees of freedom displaying ''temporal chaos,'' where the structure of the phase space can be analyzed in detail. For spatially extended systems with many degrees of freedom, on the other hand, one is dealing with spatiotemporal chaos and appropriate methods of analysis need to be developed. In addition to the general features of nonequilibrium pattern formation discussed above, detailed reviews of theoretical and experimental work on many specific systems are presented. These include Rayleigh-Benard convection in a pure fluid, convection in binary-fluid mixtures, electrohydrodynamic convection in nematic liquid crystals, Taylor-Couette flow between rotating cylinders, parametric surface waves, patterns in certain open flow systems, oscillatory chemical reactions, static and dynamic patterns in biological media, crystallization fronts, and patterns in nonlinear optics. A concluding section summarizes what has and has not been accomplished, and attempts to assess the prospects for the future.

The study of simple metal clusters has burgeoned in the last decade, motivated by the growing interest in the evolution of physical properties from the atom to the bulk solid, a progression passing through the domain of atomic clusters. On the experimental side, the rapid development of new techniques for producing the clusters and for probing and detecting them has resulted in a phenomenal increase in our knowledge of these systems. For clusters of the simplest metals, the alkali and noble metals, the electronic structure is dominated by the number of valence electrons, and the ionic cores are of secondary importance. These electrons are delocalized, and the electronic system exhibits a shell structure that is closely related to the well-known nuclear shell structure. In this article the results from a broad range of experiments are reviewed and compared with theory. Included are the behavior of the mass-abundance spectra, polarizabilities, ionization potentials, photoelectron spectra, optical spectra, and fragmentation phenomena.
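The shell structure mentioned above can be illustrated with a minimal sketch (not from the article): treating the delocalized valence electrons as independent particles in a spherically symmetric harmonic-oscillator potential, a crude stand-in for the self-consistent cluster potential, gives closed-shell electron counts directly from the oscillator degeneracies.

```python
# Shell closings for valence electrons in a 3D isotropic harmonic
# oscillator: shell n contains (n+1)(n+2)/2 orbitals, i.e. (n+1)(n+2)
# electrons once the factor of 2 for spin is included.

def oscillator_magic_numbers(n_shells):
    magic, total = [], 0
    for n in range(n_shells):
        total += (n + 1) * (n + 2)  # orbitals in shell n, times 2 spin states
        magic.append(total)
    return magic

print(oscillator_magic_numbers(5))  # [2, 8, 20, 40, 70]
```

The first closings 2, 8, 20, 40 coincide with the strongest peaks in measured alkali-cluster abundance spectra; reproducing the higher observed magic numbers (58, 92, ...) requires a more realistic potential, such as the self-consistent jellium well.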

Magnetoencephalography (MEG) is a noninvasive technique for investigating neuronal activity in the living human brain. The time resolution of the method is better than 1 ms and the spatial discrimination is, under favorable circumstances, 2-3 mm for sources in the cerebral cortex. In MEG studies, the weak 10 fT-1 pT magnetic fields produced by electric currents flowing in neurons are measured with multichannel SQUID (superconducting quantum interference device) gradiometers. The sites in the cerebral cortex that are activated by a stimulus can be found from the detected magnetic-field distribution, provided that appropriate assumptions about the source render the solution of the inverse problem unique. Many interesting properties of the working human brain can be studied, including spontaneous activity and signal processing following external stimuli. For clinical purposes, determination of the locations of epileptic foci is of interest. The authors begin with a general introduction and a short discussion of the neural basis of MEG. The mathematical theory of the method is then explained in detail, followed by a thorough description of MEG instrumentation, data analysis, and practical construction of multi-SQUID devices. Finally, several MEG experiments performed in the authors' laboratory are described, covering studies of evoked responses and of spontaneous activity in both healthy and diseased brains. Many MEG studies by other groups are discussed briefly as well.
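The order of magnitude of these fields can be checked with a minimal sketch (an illustration, not the full forward model of the article): the field of a single equivalent current dipole in an infinite homogeneous conductor, keeping only the primary-current term and neglecting the volume currents and head geometry that a realistic MEG forward calculation must include.

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def dipole_field(q, r_q, r):
    """Field at r of a current dipole q (A*m) located at r_q, keeping only
    the primary-current contribution in an infinite homogeneous medium."""
    d = r - r_q
    return MU0 / (4 * np.pi) * np.cross(q, d) / np.linalg.norm(d) ** 3

# A 10 nA*m tangential dipole (a typical equivalent cortical source
# strength) seen by a sensor 4 cm away:
b = dipole_field(np.array([10e-9, 0.0, 0.0]),
                 np.zeros(3),
                 np.array([0.0, 0.04, 0.0]))
print(np.linalg.norm(b))  # a few hundred fT
```

The result, several hundred femtotesla, falls squarely in the 10 fT-1 pT window quoted above, which is why SQUID gradiometers are required.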

Chaotic time series data are observed routinely in experiments on physical systems and in observations in the field. The authors review developments in the extraction of information of physical importance from such measurements. They discuss methods for (1) separating the signal of physical interest from contamination (''noise reduction''), (2) constructing an appropriate state space or phase space for the data in which the full structure of the strange attractor associated with the chaotic observations is unfolded, (3) evaluating invariant properties of the dynamics such as dimensions, Lyapunov exponents, and topological characteristics, and (4) model making, local and global, for prediction and other goals. They briefly touch on the effects of linearly filtering data before analyzing it as a chaotic time series. Controlling chaotic physical systems and using them to synchronize and possibly communicate between source and receiver is considered. Finally, chaos in space-time systems, that is, the dynamics of fields, is briefly considered. While much is now known about the analysis of observed temporal chaos, spatio-temporal chaotic systems pose new challenges. The emphasis throughout the review is on the tools one now has for the realistic study of measured data in laboratory and field settings. It is the goal of this review to bring these tools into general use among physicists who study classical and semiclassical systems. Much of the progress in studying chaotic systems has rested on computational tools with some underlying rigorous mathematics. Heuristic and intuitive analysis tools guided by this mathematics and realizable on existing computers constitute the core of this review.
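Step (2), state-space reconstruction, is commonly carried out by the method of time delays; here is a minimal sketch (the delay tau and dimension dim are illustrative choices, which in practice are fixed by criteria such as average mutual information and false nearest neighbors):

```python
import numpy as np

def delay_embed(x, dim, tau):
    """Build time-delay vectors [x(i), x(i+tau), ..., x(i+(dim-1)*tau)]
    from a scalar time series x; each row is one reconstructed state."""
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

# Example: a scalar record of a simple oscillation unfolds into a closed
# loop in a two-dimensional delay space.
t = np.linspace(0, 20 * np.pi, 2000)
x = np.sin(t)
y = delay_embed(x, dim=2, tau=25)
print(y.shape)  # (1975, 2)
```

For a genuinely chaotic record the same construction, with dim large enough, unfolds the strange attractor on which the dimensions and Lyapunov exponents of step (3) are then estimated.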

The jellium model of simple metal clusters has enjoyed remarkable empirical success, leading to many theoretical questions. In this review, we first survey the hierarchy of theoretical approximations leading to the model. We then describe the jellium model in detail, including various extensions. One important and useful approximation is the local-density approximation to exchange and correlation effects, which greatly simplifies self-consistent calculations of the electronic structure. Another valuable tool is the semiclassical approximation to the single-particle density matrix, which gives a theoretical framework to connect the properties of large clusters with the bulk and macroscopic surface properties. The physical properties discussed in this review are the ground-state binding energies, the ionization potentials, and the dipole polarizabilities. We also treat the collective electronic excitations from the point of view of the cluster response, including some useful sum rules.

The stability or lack thereof of nonrelativistic fermionic systems to interactions is studied within the renormalization-group (RG) framework, in close analogy with the study of critical phenomena using phi4 scalar field theory. A brief introduction to phi4 theory in four dimensions and the path-integral formulation for fermions is given before turning to the problem at hand. As for the latter, the following procedure is used. First, the modes on either side of the Fermi surface within a cutoff LAMBDA are chosen for study, in analogy with the modes near the origin in phi4 theory, and a path integral is written to describe them. Next, an RG transformation that eliminates a part of these modes, but preserves the action of the noninteracting system, is identified. Finally, the possible perturbations of this free-field fixed point are classified as relevant, irrelevant, or marginal. A d = 1 warmup calculation involving a system of fermions shows how, in contrast to mean-field theory, which predicts a charge-density wave for arbitrarily weak repulsion, and superconductivity for arbitrarily weak attraction, the renormalization-group approach correctly yields a scale-invariant system (Luttinger liquid) by taking into account both instabilities. Application of the renormalization group in d = 2 and 3, for rotationally invariant Fermi surfaces, automatically leads to Landau's Fermi-liquid theory, which appears as a fixed point characterized by an effective mass and a Landau function F, with the only relevant perturbations being of the superconducting (BCS) type. The functional flow equations for the BCS couplings are derived and separated into an infinite number of flows, one for each angular momentum. It is shown that similar results hold for rotationally noninvariant (but time-reversal-invariant) Fermi surfaces also, with obvious loss of rotational invariance in the parametrization of the fixed-point interactions.
A study of a nested Fermi surface shows an additional relevant flow leading to charge-density-wave formation. It is pointed out that, for small LAMBDA/K(F), a 1/N expansion emerges, with N = K(F)/LAMBDA, which explains why one is able to solve the narrow-cutoff theory. The search for non-Fermi liquids in d = 2 using the RG is discussed. Bringing a variety of phenomena (Landau theory, charge-density waves, BCS instability, nesting, etc.) under the one unifying principle of the RG not only allows us to better understand and unify them, but also paves the way for generalizations and extensions. The article is pedagogical in nature and is expected to be accessible to any serious graduate student. On the other hand, its survey of the vast literature is mostly limited to the RG approach.
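Schematically, the one-loop flow has the same structure in every angular-momentum channel l (this sketch is not the full functional flow of the article; u_l denotes the BCS coupling in channel l, t the RG flow parameter, and c > 0 a constant whose value depends on conventions):

```latex
\frac{du_l}{dt} = -c\,u_l^2
\quad\Longrightarrow\quad
u_l(t) = \frac{u_l(0)}{1 + c\,t\,u_l(0)},
```

so a repulsive coupling (u_l > 0) is marginally irrelevant and flows logarithmically to zero, while an attractive coupling (u_l < 0) grows and diverges at a finite t, which is the RG signature of the BCS instability.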

In this paper, theoretical and experimental approaches to flow, hydrodynamic dispersion, and miscible and immiscible displacement processes in reservoir rocks are reviewed and discussed. Both macroscopically homogeneous and heterogeneous rocks are considered. The latter are characterized by large-scale spatial variations and correlations in their effective properties and include rocks that may be characterized by several distinct degrees of porosity, a well-known example of which is a fractured rock with two degrees of porosity: those of the pores and of the fractures. First, the diagenetic processes that give rise to the present reservoir rocks are discussed and a few geometrical models of such processes are described. Then, measurement and characterization of important properties, such as pore-size distribution, pore-space topology, and pore surface roughness, and morphological properties of fracture networks are discussed. It is shown that fractal and percolation concepts play important roles in the characterization of rocks, from the smallest length scale at the pore level to the largest length scales at the fracture and fault scales. Next, various structural models of homogeneous and heterogeneous rock are discussed, and theoretical and computer simulation approaches to flow, dispersion, and displacement in such systems are reviewed. Two different modeling approaches to these phenomena are compared. The first approach is based on the classical equations of transport supplemented with constitutive equations describing the transport and other important coefficients and parameters. These are called the continuum models. The second approach is based on network models of pore space and fractured rocks; it models the phenomena at the smallest scale, a pore or fracture, and then employs large-scale simulation and modern concepts of the statistical physics of disordered systems, such as scaling and universality, to obtain the macroscopic properties of the system.
The fundamental roles of the interconnectivity of the rock and its wetting properties in dispersion and two-phase flows, and those of microscopic and macroscopic heterogeneities in miscible displacements are emphasized. Two important conceptual advances for modeling fractured rocks and studying flow phenomena in porous media are also discussed. The first, based on cellular automata, can in principle be used for computing macroscopic properties of flow phenomena in any porous medium, regardless of the complexity of its structure. The second, simulated annealing, borrowed from optimization processes and the statistical mechanics of spin glasses, is used for finding the optimum structure of a fractured reservoir that honors a limited amount of experimental data.
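The percolation concepts invoked above can be illustrated with a minimal network sketch (an illustration, not a model from the review): bonds of a square lattice, standing in for pore throats, are open with probability p, and a union-find structure tests whether the open bonds connect the inlet face to the outlet face.

```python
import random

def find(parent, i):
    """Root of i with path halving."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def union(parent, a, b):
    parent[find(parent, a)] = find(parent, b)

def spans(L, p, seed=0):
    """True if open bonds connect the left column to the right column
    of an L x L square lattice (bond percolation with probability p)."""
    rng = random.Random(seed)
    parent = list(range(L * L))
    idx = lambda x, y: x * L + y
    for x in range(L):
        for y in range(L):
            if x + 1 < L and rng.random() < p:  # horizontal bond
                union(parent, idx(x, y), idx(x + 1, y))
            if y + 1 < L and rng.random() < p:  # vertical bond
                union(parent, idx(x, y), idx(x, y + 1))
    left = {find(parent, idx(0, y)) for y in range(L)}
    right = {find(parent, idx(L - 1, y)) for y in range(L)}
    return bool(left & right)

print(spans(20, 1.0), spans(20, 0.0))  # True False
```

Near the exact bond-percolation threshold p_c = 1/2 of the square lattice, spanning becomes probabilistic and the connected cluster is fractal, which is the regime where the scaling and universality concepts mentioned above apply.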

Irreversible random sequential adsorption (RSA) on lattices, and continuum ''car parking'' analogues, have long received attention as models for reactions on polymer chains, chemisorption on single-crystal surfaces, adsorption in colloidal systems, and solid state transformations. Cooperative generalizations of these models (CSA) are sometimes more appropriate, and can exhibit richer kinetics and spatial structure, e.g., autocatalysis and clustering. The distribution of filled or transformed sites in RSA and CSA is not described by an equilibrium Gibbs measure. This is the case even for the saturation ''jammed'' state of models where the lattice or space cannot fill completely. However, exact analysis is often possible in one dimension, and a variety of powerful analytic methods have been developed for higher dimensional models. Here we review the detailed understanding of asymptotic kinetics, spatial correlations, percolative structure, etc., which is emerging for these far-from-equilibrium processes.
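The simplest exactly solved case, irreversible dimer deposition on a one-dimensional lattice, can be sketched in a few lines (an illustration under the standard model assumptions, not code from the review); processing a random permutation of all dimer positions and depositing whenever both sites are empty reproduces the jammed state of sequential random attempts.

```python
import random

def dimer_rsa(n, seed=0):
    """Irreversibly deposit dimers at random on a 1D lattice of n sites
    until no adjacent empty pair remains; return the jammed coverage."""
    rng = random.Random(seed)
    occupied = [False] * n
    positions = list(range(n - 1))   # left site of each candidate dimer
    rng.shuffle(positions)           # random deposition order
    for i in positions:
        if not occupied[i] and not occupied[i + 1]:
            occupied[i] = occupied[i + 1] = True
    return sum(occupied) / n

print(dimer_rsa(100_000))  # close to 1 - e**-2 ~ 0.8647
```

The simulated coverage approaches Flory's exact jamming value 1 - e^(-2) ~ 0.8647, a concrete instance of a saturated state in which the lattice cannot fill completely and whose site distribution is not given by any equilibrium Gibbs measure.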

Although skeptical of the prohibitive power of no-hidden-variables theorems, John Bell was himself responsible for the two most important ones. I describe some recent versions of the lesser known of the two (familiar to experts as the ''Kochen-Specker theorem'') which have transparently simple proofs. One of the new versions can be converted without additional analysis into a powerful form of the very much better known ''Bell's Theorem,'' thereby clarifying the conceptual link between these two results of Bell.

It is becoming increasingly clear that the concept of a diquark (a two-quark system) is important for understanding hadron structure and high-energy particle reactions. According to our present knowledge of quantum chromodynamics (QCD), diquark correlations arise in part from spin-dependent interactions between two quarks, from quark radial or orbital excitations, and from quark mass differences. Diquark substructures affect the static properties of baryons and the mechanisms of baryon decay. Diquarks also play a role in hadron production in hadron-initiated reactions, deep-inelastic lepton scattering by hadrons, and in e+e- reactions. Diquarks are important in the formation and properties of baryonium and mesonlike semistable states. Many spin effects observed in high-energy exclusive reactions pose severe problems for the pure quark picture of baryons and might be explained by the introduction of diquarks as hadronic constituents. There is considerable controversy, not about the existence of diquarks in hadrons, but about their properties and their effects. In this work a broad selection of the main ideas about diquarks is reviewed.

The authors present an overview of ongoing studies of the rich dynamical behavior of the uniform, deterministic Burridge-Knopoff model of an earthquake fault, discussing the model's behavior in the context of current seismology. The topics considered include: (1) basic properties of the model, such as the distinction between small and large events and the magnitude vs frequency distribution; (2) dynamics of individual events, including dynamical selection of rupture propagation speeds; (3) generalizations of the model to more realistic, higher-dimensional models; and (4) studies of predictability, in which artificial catalogs generated by the model are used to test and determine the limitations of pattern recognition algorithms used in seismology.
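In its uniform one-dimensional form the model is a chain of blocks of mass m, coupled to their neighbors by springs k_c and to a plate moving at speed V by springs k_p, each block slipping against a velocity-weakening friction force F; schematically (a standard statement of the model, quoted here for orientation rather than from the article):

```latex
m\,\ddot{x}_j = k_c\,(x_{j+1} - 2x_j + x_{j-1}) - k_p\,(x_j - V t) - F(\dot{x}_j).
```

The velocity-weakening nonlinearity in F is what produces the stick-slip events whose statistics, including the distinction between small and large events in topic (1), are the subject of the studies reviewed.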

A summary is presented of the statistical mechanical theory of learning a rule with a neural network, a rapidly advancing area which is closely related to other inverse problems frequently encountered by physicists. By emphasizing the relationship between neural networks and strongly interacting physical systems, such as spin glasses, the authors show how learning theory has provided a workshop in which to develop new, exact analytical techniques.
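The basic teacher-student setting underlying this theory can be sketched minimally (an illustration, not taken from the article; dimensions, sample sizes, and the seed are arbitrary choices): a fixed teacher perceptron defines the rule, a student perceptron is trained on random examples labeled by the teacher, and the generalization error is estimated on fresh inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
teacher = rng.standard_normal(d)          # the rule to be learned

# Training set: random inputs labeled by the teacher's sign.
X = rng.standard_normal((500, d))
y = np.sign(X @ teacher)

# Classical perceptron learning rule: update on each mistake.
student = np.zeros(d)
for _ in range(10):                        # a few passes over the data
    for x_i, y_i in zip(X, y):
        if np.sign(student @ x_i) != y_i:
            student += y_i * x_i

# Generalization error: disagreement with the teacher on fresh inputs.
X_test = rng.standard_normal((5000, d))
err = np.mean(np.sign(X_test @ student) != np.sign(X_test @ teacher))
print(err)
```

The statistical mechanics reviewed computes exactly how this error decays with the ratio of examples to weights in the thermodynamic limit, using replica and related spin-glass techniques.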

Correlation functions are one of the key tools used to study the structure of the QCD vacuum. They are constructed out of the fundamental fields and can be calculated using quantum-field-theory methods, such as lattice gauge theory. One can obtain many of these functions using the rich phenomenology of hadron physics. They are also the object of study in various quark models of hadronic structure. This review begins with available phenomenological information about the correlation functions, with their most important properties emphasized. These are then compared with predictions of various theoretical approaches, including lattice numerical simulations, the operator product expansion, and the interacting instanton approximation.

Electrophotography is one means of arranging 100 million pigmented plastic particles on a sheet of paper to faithfully replicate an original. It is based on many diverse phenomena and employs many properties of matter. These include gaseous ionization in the charging step; photogeneration and charge transport through disordered solid-state materials in the latent-image-formation step; triboelectricity in the particle-charging step; mechanical, electrostatic, and magnetic forces to detach particles in the development and transfer steps; and the application and transfer of heat in the fixing step. In addition, it relies on a precise balance of thermorheological, chemical, and mechanical properties of large area films and small particles. This article reviews the physics of the latent-image formation and development steps.

Over the past seven years, many examples of periodic crystals closely related to quasicrystalline alloys have been discovered. These crystals have been termed approximants, since the arrangements of atoms within their unit cells closely approximate the local atomic structures in quasicrystals. This colloquium focuses on these approximant structures, their description, and their relationship to quasicrystals.

Molecular-beam experiments have exposed a new wealth of detail on the general reaction A* + B -> A + B+ + e- first suggested by Penning in 1927. The new capabilities not available to traditional swarm techniques include mass and electron spectroscopy on the reaction products and angle-resolved measurements of the scattering of both reagents and products. These new results have stimulated the recent development of both the electronic structure and the dynamical theories necessary for a first-principles description of at least the simplest of these reactions, those involving small atomic and diatomic species B. Recent progress in both experiment and interpretation is critically reviewed, and the prospects for attaining a global understanding of Penning ionization in larger systems are assessed.