We present an introduction to and a tutorial on the properties of the recently discovered ideal circuit element, the memristor. By definition, a memristor M relates the charge q and the magnetic flux phi in a circuit, and complements the resistor R, the capacitor C and the inductor L as an ingredient of ideal electrical circuits. The properties of these three elements and their circuits are part of the standard curricula. The existence of the memristor as the fourth ideal circuit element was predicted in 1971 based on symmetry arguments, but it was clearly demonstrated experimentally only last year. We present the properties of a single memristor, of memristors in series and parallel, as well as of ideal memristor-capacitor (MC), memristor-inductor (ML) and memristor-capacitor-inductor (MCL) circuits. We find that the memristor has hysteretic current-voltage characteristics. We show that the ideal MC (ML) circuit undergoes non-exponential charge (current) decay with two time scales, and that by switching the polarity of the capacitor an ideal MCL circuit can be tuned from overdamped to underdamped. We present simple models which show that these unusual properties are closely related to the memristor's internal dynamics. This tutorial complements the pedagogy of ideal circuit elements (R, C and L) and the properties of their circuits, and is aimed at undergraduate physics and electrical engineering students.
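The pinched hysteresis loop mentioned above can be reproduced with a minimal charge-controlled memristor model. This is a sketch only: the linear memristance M(q) and all parameter values below are illustrative assumptions, not taken from the paper.

```python
import math

# Charge-controlled memristor: M(q) interpolates linearly between an ON and
# an OFF resistance as charge q flows through the device (values assumed).
M_ON, M_OFF = 100.0, 1600.0   # limiting memristances (ohm)
Q_MAX = 1e-4                  # charge needed to switch fully (C)
V0, F = 1.0, 1.0              # sinusoidal drive amplitude (V) and frequency (Hz)
DT = 1e-5                     # Euler time step (s)

def memristance(q):
    # clip q so M stays within [M_ON, M_OFF]
    q = min(max(q, 0.0), Q_MAX)
    return M_OFF - (M_OFF - M_ON) * q / Q_MAX

def simulate(cycles=1):
    q, t = 0.0, 0.0
    trace = []                        # (voltage, current) samples
    while t < cycles / F:
        v = V0 * math.sin(2 * math.pi * F * t)
        i = v / memristance(q)        # Ohm's law with state-dependent M
        q += i * DT                   # dq/dt = i
        t += DT
        trace.append((v, i))
    return trace

trace = simulate()
```

Plotting `trace` gives the hysteresis loop: the current at a given voltage differs between the rising and falling parts of the drive because the accumulated charge, and hence M, differs.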
The fact that relatively simple entities, such as particles or neurons, or even ants or bees or humans, give rise to fascinatingly complex behaviour when interacting in large numbers is the hallmark of complex systems science. Agent-based models are frequently employed for modelling and obtaining a predictive understanding of complex systems. Since the sheer number of equations that describe the behaviour of an entire agent-based model often makes it impossible to solve such models exactly, Monte Carlo simulation methods must be used for the analysis. However, unlike the pairwise interactions among particles that typically govern solid-state physics systems, interactions among agents in models of biological, sociological or humanities-related systems often involve groups, and they also involve a larger number of possible states even for the most simplified description of reality. This raises the question: when can we be certain that an observed simulation outcome of an agent-based model is actually stable and valid in the large system-size limit? The latter is key for the correct determination of phase transitions between different stable solutions, and for the understanding of the underlying microscopic processes that led to these phase transitions. We show that a satisfactory answer can only be obtained by means of a complete stability analysis of subsystem solutions. A subsystem solution can be formed by any subset of all possible agent states. The winner between two subsystem solutions can be determined by the average moving direction of the invasion front that separates them, yet it is crucial that the competing subsystem solutions are characterised by a proper composition and spatiotemporal structure before the competition starts. We use the spatial public goods game with diverse tolerance as an example, but the approach has relevance for a wide variety of agent-based models.
We discuss the categorization of 20 quantum mechanics problems by physics professors and undergraduate students from two honours-level quantum mechanics courses. Professors and students were asked to categorize the problems based upon similarity of solution. We also had individual discussions with professors who categorized the problems. Faculty members' categorizations were overall rated higher than those of students by three faculty members who evaluated all of the categorizations. The categories created by faculty members were more diverse than those they had created for a set of introductory mechanics problems. Some faculty members noted that the categorization of introductory physics problems often involves identifying the fundamental principles relevant to the problem, whereas in upper-level undergraduate quantum mechanics problems it mainly involves identifying the concepts and procedures required to solve the problem. Moreover, physics faculty members who evaluated others' categorizations expressed that the task was very challenging, and they sometimes found another person's categorization to be better than their own. They also rated some concrete categories, such as 'hydrogen atom' or 'simple harmonic oscillator', higher than other concrete categories, such as 'infinite square well' or 'free particle'.
The use of computers in statistical physics is common because the sheer number of equations that describe the behaviour of an entire system particle by particle often makes it impossible to solve them exactly. Monte Carlo methods form a particularly important class of numerical methods for solving problems in statistical physics. Although these methods are simple in principle, their proper use requires a good command of statistical mechanics, as well as considerable computational resources. The aim of this paper is to demonstrate how the usage of widely accessible graphics cards on personal computers can elevate the computing power in Monte Carlo simulations by orders of magnitude, thus allowing live classroom demonstration of phenomena that would otherwise be out of reach. As an example, we use the public goods game on a square lattice where two strategies compete for common resources in a social dilemma situation. We show that the second-order phase transition to an absorbing phase in the system belongs to the directed percolation universality class, and we compare the time needed to arrive at this result by means of the main processor and by means of a suitable graphics card. Parallel computing on graphics processing units has been developed actively during the last decade, to the point where today the learning curve for entry is anything but steep for those familiar with programming. The subject is thus ripe for inclusion in graduate and advanced undergraduate curricula, and we hope that this paper will facilitate this process in the realm of physics education. To that end, we provide a documented source code for an easy reproduction of presented results and for further development of Monte Carlo simulations of similar systems.
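The Monte Carlo dynamics described above can be sketched in a few lines on the CPU; the GPU version in the paper parallelizes exactly these elementary steps. The code below is a minimal illustrative sketch, not the paper's documented source: cooperators and defectors on a periodic square lattice, public goods payoffs, and strategy imitation via the Fermi rule, with the synergy factor and noise chosen as assumptions.

```python
import math, random

L = 32         # linear lattice size (assumption)
R = 3.8        # synergy factor of the public goods game (assumption)
K = 0.5        # noise in the Fermi imitation rule (assumption)

random.seed(1)
# 1 = cooperator, 0 = defector, random initial state
lattice = [[random.randint(0, 1) for _ in range(L)] for _ in range(L)]

def neighbours(x, y):
    # von Neumann neighbourhood with periodic boundaries
    return [((x + 1) % L, y), ((x - 1) % L, y), (x, (y + 1) % L), (x, (y - 1) % L)]

def payoff(x, y):
    # sum of payoffs from the 5 overlapping groups the site belongs to:
    # each group pools cooperators' contributions, multiplies by R, shares
    # equally; a cooperator pays a unit cost in each group
    total = 0.0
    for gx, gy in [(x, y)] + neighbours(x, y):
        members = [(gx, gy)] + neighbours(gx, gy)
        nc = sum(lattice[i][j] for i, j in members)
        total += R * nc / len(members) - lattice[x][y]
    return total

def mc_step():
    # one full Monte Carlo step = L*L elementary imitation attempts
    for _ in range(L * L):
        x, y = random.randrange(L), random.randrange(L)
        nx, ny = random.choice(neighbours(x, y))
        if lattice[x][y] != lattice[nx][ny]:
            # Fermi rule: adopt the neighbour's strategy with this probability
            p = 1.0 / (1.0 + math.exp((payoff(x, y) - payoff(nx, ny)) / K))
            if random.random() < p:
                lattice[x][y] = lattice[nx][ny]

for _ in range(20):
    mc_step()
rho_c = sum(map(sum, lattice)) / (L * L)   # cooperator density
```

Tracking `rho_c` over time near the critical synergy factor is what reveals the directed-percolation scaling; that measurement needs much larger lattices and times, which is where the GPU speed-up matters.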
Complex systems are characterised by specific time-dependent interactions among their many constituents. As a consequence they often manifest rich, non-trivial and unexpected behaviour. Examples arise both in the physical and non-physical worlds. The study of complex systems forms a new interdisciplinary research area that cuts across physics, biology, ecology, economics, sociology, and the humanities. In this paper we review the essence of complex systems from a physicist's point of view, and try to clarify what makes them conceptually different from systems that are traditionally studied in physics. Our goal is to demonstrate how the dynamics of such systems may be conceptualised in quantitative and predictive terms by extending notions from statistical physics, and how they can often be captured in a framework of co-evolving multiplex network structures. We mention three areas of complex systems science that are currently studied extensively: the science of cities, the dynamics of societies, and the representation of texts as evolutionary objects. We discuss why these areas form complex systems in the above sense. We argue that there exists plenty of new ground for physicists to explore and that methodical and conceptual progress is needed most.
We provide a short introduction to the field of topological data analysis (TDA) and discuss its possible relevance for the study of complex systems. TDA provides a set of tools to characterise the shape of data, in terms of the presence of holes or cavities between the points. The methods, based on the notion of simplicial complexes, generalise standard network tools by naturally allowing for many-body interactions and providing results robust under continuous deformations of the data. We present strengths and weaknesses of current methods, as well as a range of empirical studies relevant to the field of complex systems, before identifying future methodological challenges to help understand the emergence of collective phenomena.
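One ingredient of the TDA toolbox mentioned above can be sketched very compactly: the number of connected components (the Betti-0 number) of a Vietoris-Rips complex built on a point cloud at a fixed scale. This toy example (with assumed data) uses union-find; full persistent homology additionally tracks holes and cavities (Betti-1 and higher) across all scales, which needs more machinery.

```python
import math

def betti0(points, eps):
    # connected components of the graph linking points closer than eps,
    # counted with union-find (path-halving)
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= eps:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# toy data: two well-separated clusters
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5)]
```

Sweeping `eps` from small to large and recording when components merge is precisely the persistence information TDA uses to describe the shape of the data robustly.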
Thermography is a nondestructive testing (NDT) technique based on the principle that two dissimilar materials, i.e. materials possessing different thermophysical properties, produce two distinctive thermal signatures that can be revealed by an infrared sensor, such as a thermal camera. The fields of NDT application are expanding from classical building or electronic component monitoring to more recent ones such as the inspection of artworks or composite materials. Furthermore, thermography can conveniently be used as a didactic tool for physics education in universities, given that it makes it possible to visualize fundamental principles from thermal physics and mechanics, among other areas.
An overlooked, straightforward application of velocity reciprocity to a triplet of inertial frames in collinear motion identifies the ratio of the sum of their cyclic relative velocities to the negative of their product as a cosmic invariant, whose inverse square root corresponds to a universal limit speed. A logical indeterminacy of the ratio equation establishes the repeatedly observed unchanged speed of stellar light as one instance of this universal limit speed, which formally renders the second postulate redundant. The ratio equation furthermore enables the limit speed to be quantified, in principle, independently of a limit-speed signal. Assuming negligible gravitational fields, two deep-space vehicles in non-collinear motion could measure, with only a single clock, the limit speed against the speed of light, without requiring these speeds to be identical. Moreover, the cosmic invariant (from dynamics, equal to the mass-to-energy ratio) emerges explicitly as a function of signal response time ratios between three collinear vehicles, multiplied by the inverse square of the velocity of whatever arbitrary signal might be used.
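The claimed invariant can be checked numerically. With units c = 1, and v12, v23, v31 the cyclic relative velocities (frame 2 in frame 1, frame 3 in frame 2, frame 1 in frame 3), the standard relativistic composition law fixes v31 once v12 and v23 are chosen, and the ratio sum/(-product) should equal 1/c**2 for any choice. The composition law is used here only as a consistency check; the paper's argument derives the invariant from reciprocity without presupposing it.

```python
C = 1.0   # limit speed, set to 1 for the check

def compose(u, v):
    # relativistic addition of collinear velocities
    return (u + v) / (1 + u * v / C**2)

def invariant(v12, v23):
    # velocity of frame 1 as seen from frame 3
    v31 = -compose(v12, v23)
    s = v12 + v23 + v31
    p = v12 * v23 * v31
    return s / (-p)          # should equal 1/C**2 for any v12, v23
```

For example, `invariant(0.3, 0.5)` and `invariant(-0.9, 0.8)` both evaluate to 1 to machine precision, independently of the two freely chosen velocities.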
The higher derivatives of motion are rarely discussed in the teaching of classical mechanics of rigid bodies; nevertheless, we experience the effects not only of acceleration, but also of jerk and snap. In this paper we discuss the third and higher order derivatives of displacement with respect to time, using trampolines and theme park roller coasters to illustrate the concept. We also discuss the effects on the human body of different types of acceleration, jerk, snap and higher derivatives, and how they can be used in physics education to further enhance learning and thus the understanding of classical mechanics concepts.
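In practice, jerk and snap are estimated from sampled position data by repeated numerical differentiation. The sketch below (a toy sinusoidal trajectory, not ride data) applies central finite differences; each pass shortens the series by two samples.

```python
import math

def derivative(samples, dt, order):
    # repeated central differences: each pass drops one sample at each end
    out = samples
    for _ in range(order):
        out = [(out[i + 1] - out[i - 1]) / (2 * dt) for i in range(1, len(out) - 1)]
    return out

dt = 1e-3
t = [i * dt for i in range(2001)]
x = [math.sin(ti) for ti in t]          # toy position signal
jerk = derivative(x, dt, 3)             # d^3x/dt^3, analytically -cos(t)
snap = derivative(x, dt, 4)             # d^4x/dt^4, analytically  sin(t)
```

For real accelerometer data one would low-pass filter before differentiating, since each differentiation pass amplifies measurement noise.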
We have developed and evaluated a quantum interactive learning tutorial (QuILT) on a Mach-Zehnder interferometer with single photons to expose upper-level students in quantum mechanics courses to contemporary quantum optics applications. The QuILT strives to help students develop the ability to apply fundamental quantum principles to physical situations in quantum optics and explore the differences between classical and quantum ideas. The QuILT adapts visualization tools to help students build physical intuition about counter-intuitive quantum optics phenomena with single photons including a quantum eraser setup and focuses on helping them integrate qualitative and quantitative understanding. We discuss findings from in-class evaluations.
Many modern theories which try to unify gravity with the Standard Model of particle physics, such as string theory, propose two key modifications to the commonly known physical theories: the existence of additional space dimensions, and the existence of a minimal length, i.e. a maximal resolution. While extra dimensions have received wide coverage in publications over the last ten years (especially due to the prediction of micro black hole production at the Large Hadron Collider), the phenomenology of models with a minimal length is still less investigated. In a summer study project for bachelor students in 2010, we explored some phenomenological implications of the potential existence of a minimal length. In this paper, we review the idea and formalism of a quantum gravity-induced minimal length in the generalized uncertainty principle framework as well as in the coherent state approach to non-commutative geometry. These approaches are effective models which can make model-independent predictions for experiments and are ideally suited for phenomenological studies. Pedagogical examples are provided to help grasp the effects of a quantum gravity-induced minimal length. This paper is intended for graduate students and non-specialists interested in quantum gravity.
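How a generalized uncertainty principle (GUP) produces a minimal length can be seen in a short worked example. In units hbar = 1, the common GUP form dx*dp >= (1/2)(1 + beta*dp**2) gives dx >= (1/2)(1/dp + beta*dp), which, unlike the Heisenberg case, cannot be made arbitrarily small: the bound is minimized at dp = 1/sqrt(beta), giving dx_min = sqrt(beta). The value of beta below is an arbitrary illustrative choice; the grid scan simply confirms the analytic minimum.

```python
import math

BETA = 0.04   # GUP parameter (illustrative assumption), units hbar = 1

def dx_bound(dp):
    # lower bound on position uncertainty implied by the GUP
    return 0.5 * (1.0 / dp + BETA * dp)

# scan dp over a grid and locate the smallest achievable dx
dps = [0.01 * k for k in range(1, 10001)]
dx_min = min(dx_bound(dp) for dp in dps)
```

The scan returns dx_min = 0.2 = sqrt(BETA): below this length scale, no measurement can resolve positions, which is the minimal-length phenomenology the paper reviews.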
A solid grasp of the probability distributions for measuring physical observables is central to connecting the quantum formalism to measurements. However, students often struggle with the probability distributions of measurement outcomes for an observable and have difficulty expressing this concept in different representations. Here we first describe the difficulties that upper-level undergraduate and PhD students have with the probability distributions for measuring physical observables in quantum mechanics. We then discuss how student difficulties found in written surveys and individual interviews were used as a guide in the development of a quantum interactive learning tutorial (QuILT) to help students develop a good grasp of the probability distributions of measurement outcomes for physical observables. The QuILT strives to help students become proficient in expressing the probability distributions for the measurement of physical observables in Dirac notation and in the position representation and be able to convert from Dirac notation to position representation and vice versa. We describe the development and evaluation of the QuILT and findings about the effectiveness of the QuILT from in-class evaluations.
The Stoner-Wohlfarth (SW) model is the simplest model that adequately describes the physics of fine magnetic grains, whose magnetization can be used for digital magnetic storage (floppies, hard disks and tapes). Magnetic storage density is presently increasing steadily, almost in step with the shrinking of electronic device size and circuitry, and magnetism in general appears as a new contender for many novel computing applications that were traditionally considered beyond its range. Denser storage leads to finer magnetic grains, and smaller size leads to magnetic grains so fine that they contain a single magnetic domain, i.e. a region in the material with a well-defined uniform magnetization, best described with the mathematics of the SW model.
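The essence of the SW model fits in a few lines. A single-domain grain with its easy axis at angle PSI to the applied field h has reduced energy e(theta) = 0.5*sin(theta - PSI)**2 - h*cos(theta); sweeping h and relaxing theta into the nearest local energy minimum produces the hysteretic magnetization m = cos(theta). The sketch below uses a standard textbook form of the model with illustrative parameters, not code from the paper.

```python
import math

PSI = math.pi / 4          # easy-axis angle relative to the field (assumption)

def relax(theta, h, steps=20000, lr=0.01):
    # gradient descent on the reduced SW energy; stays in the current
    # local minimum until it disappears, which is what causes hysteresis
    for _ in range(steps):
        grad = math.sin(theta - PSI) * math.cos(theta - PSI) + h * math.sin(theta)
        theta -= lr * grad
    return theta

theta = 0.0
branch = []                 # (h, m) pairs on the downward field sweep +1 -> -1
for k in range(41):
    h = 1.0 - 0.05 * k
    theta = relax(theta, h)
    branch.append((h, math.cos(theta)))
```

On this branch the magnetization stays positive well past h = 0 (remanence) and only switches sign once the metastable minimum vanishes; for PSI = 45 degrees this happens at the SW switching field h = 0.5.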
Interviews with students suggest that even when they understand the formalism and the formal nature of quantum theory, they still often desire a mental picture of what the equations describe and some tangible experience with the wavefunctions. Here we discuss a mechanical wave system capable of correctly reproducing the mechanical equivalent of a quantum system in a potential of, in principle, any form, together with the resulting waveforms. We have successfully reproduced the finite potential well, the potential barrier and the parabolic potential. We believe that these mechanical waveforms can provide a valuable base of experience for introductory students to start from. We aim to show that mechanical systems that are described by the same mathematics as quantum mechanical ones indeed behave in the same way. We believe that even if treated purely as a wave phenomenon, the system provides much insight into wave mechanics. This can be especially useful for physics teachers and others who often need to resort to concepts and experience rather than mathematics when explaining physical phenomena.
In his famous novel The Caves of Steel, Isaac Asimov imagined a public transportation system based on a series of parallel moving walkways accelerating pedestrians progressively toward a high-speed central lane which continuously carries the crowds of the gigantic cities of our future Earth. In this paper, it is shown that the user of this system would face an interesting optimization problem, namely the design of the path which would minimize the travel time from one place to another. This problem is solved with the classical techniques of Lagrangian mechanics.
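The flavour of this optimization problem can be captured in a toy two-strip version (all numbers below are assumptions, not Asimov's or the paper's). A pedestrian of walking speed U crosses two parallel strips of width W whose floors drift at speeds V1 and V2 along x, travelling from (0, 0) to (X, 2*W); the one free choice is where the path crosses the boundary between the strips, and we minimize the total travel time over that crossing point.

```python
import math

U, W, X = 2.0, 1.0, 10.0
V = [0.5, 1.5]   # drift speeds of the strips; U > max(V) so every dx is reachable

def crossing_time(dx, v):
    # time to achieve ground displacement (dx, W) while walking at speed U
    # relative to a floor drifting at v; from (dx - v*t)**2 + W**2 == (U*t)**2
    a = U * U - v * v
    return (-v * dx + math.sqrt(v * v * dx * dx + a * (dx * dx + W * W))) / a

def total_time(xm):
    # cross strip 1 with x-displacement xm, strip 2 with the remainder
    return crossing_time(xm, V[0]) + crossing_time(X - xm, V[1])

# brute-force scan over the crossing point
N = 2000
t_opt, x_opt = min((total_time(X * k / N), X * k / N) for k in range(N + 1))
```

The continuum problem in the paper replaces the single crossing point by a whole path across many lanes and is solved with the Euler-Lagrange equations; the one-parameter scan here plays the role of that variational calculation in miniature.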
Visualisations of the flow of electromagnetic energy based on the time-averaged Poynting vector have yielded important and sometimes counter-intuitive physical insights in the case of electric circuits containing resistors and inductors. Less well understood is the flow of electromagnetic energy in spatially contiguous media excited by grounded sources. In geophysics, for example, it is important for readers to recognise how geological structures help shape controlled-source electromagnetic responses. It is demonstrated herein, using energy flow visualisations, that a resistive layer impeding vertical electric current flow produces a larger anomalous response to grounded-source excitation at the Earth's surface than an equivalent conductive layer.