The Quantum-Inspired Evolutionary Algorithm (QEA) has been shown to outperform classical Genetic Algorithm based evolutionary techniques on combinatorial optimization problems such as the 0/1 knapsack problem. QEA uses a quantum-computing-inspired representation of solutions, the Q-bit individual, which consists of Q-bits. The probability amplitudes of the Q-bits are updated by the Q-gate operator, the classical analogue of the quantum rotation operator. The Q-gate is the only variation operator in QEA; together with a problem-specific heuristic, it exploits the properties of the best solutions found so far. In this paper, we analyze the characteristics of QEA on the 0/1 knapsack problem and show that applying the Q-gate variation operator with a probability in the range 0.3 to 0.4 is most likely to strike a good balance between exploration and exploitation. Experimental results agree with this analytical finding.
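The core QEA update described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the rotation angle of 0.05π, the best-so-far bit string, and the application probability p = 0.35 (inside the 0.3 to 0.4 range the abstract recommends) are all assumptions.

```python
import math
import random

def rotate(alpha, beta, delta_theta):
    """Apply the Q-gate (rotation) to one Q-bit's amplitudes."""
    c, s = math.cos(delta_theta), math.sin(delta_theta)
    return c * alpha - s * beta, s * alpha + c * beta

def observe(qbits, rng):
    """Collapse each Q-bit to a classical bit; P(bit = 1) = beta^2."""
    return [1 if rng.random() < b * b else 0 for _, b in qbits]

# A Q-bit individual starts in equal superposition: alpha = beta = 1/sqrt(2).
n = 8
qbits = [(1 / math.sqrt(2), 1 / math.sqrt(2))] * n
best = [1] * n  # assumed best-so-far bit string for illustration

rng = random.Random(0)
x = observe(qbits, rng)  # one observed candidate solution

# Rotate each Q-bit toward the corresponding bit of the best solution,
# applying the Q-gate only with probability p (p and the angle are assumptions).
p, dtheta = 0.35, 0.05 * math.pi
qbits = [
    rotate(a, b, dtheta if best[i] == 1 else -dtheta) if rng.random() < p else (a, b)
    for i, (a, b) in enumerate(qbits)
]
```

Rotation preserves normalization, so every Q-bit keeps a valid probability pair after the update.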
This paper presents an efficient authentication method for JPEG images based on Genetic Algorithms (GA). Current authentication methods for JPEG images require the receiver to know the quantization table beforehand in order to authenticate the images. Moreover, the quantization tables used in JPEG compression differ across quality factors, which further burdens the receiver with maintaining several quantization tables. We propose a novel GA-based method with three advantages. First, the computation at the receiver end is simplified. Second, receivers no longer need to maintain quantization tables. Third, the method resists Vector Quantization (VQ) and Copy-Paste (CP) attacks because it generates authentication information that is unique to each block and each image. Furthermore, we develop a two-level detection strategy to reduce the false acceptance ratio of invalid blocks. Experimental results show that the proposed GA-based method can successfully authenticate JPEG images under various attacks.
This paper presents a novel distributed genetic algorithm (GA) architecture for the design of vector quantizers. The design is based on a multi-core architecture, where each island of the GA is associated with a hardware accelerator and a softcore processor for independent genetic evolution. An on-chip RAM with a mutex circuit is adopted for migrating genetic strings among the islands, allowing simple and flexible migration in a hardware distributed GA. Experimental results show that the proposed architecture requires significantly less computation time than its software counterparts running on multicore processors with multithreading for GA-based optimization.
Linear discriminant analysis (LDA) for dimension reduction has been applied to a wide variety of problems, such as face recognition. However, it faces a major computational difficulty when the number of dimensions exceeds the sample size. In this paper, we propose a margin-based criterion for linear dimension reduction that addresses this problem. We establish an error bound for the proposed technique by showing its relation to least squares regression. In addition, the criterion can be optimized with well-established numerical procedures such as semi-definite programming. We demonstrate the efficacy of our proposal and compare it against competing techniques on a number of examples.
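To see why a margin-style criterion sidesteps LDA's small-sample difficulty, consider the maximum-margin form tr(W^T (S_b - S_w) W): it needs no inversion of the within-class scatter, so it remains well defined when dimensions exceed samples. The sketch below optimizes this form by a plain symmetric eigendecomposition; this is an illustration of the idea, not necessarily the paper's exact criterion or its semi-definite programming procedure.

```python
import numpy as np

def mmc_projection(X, y, d):
    """Margin-style linear dimension reduction: top-d eigenvectors of
    S_b - S_w. No inversion of S_w is needed, so the method stays
    well-defined when n_features > n_samples (unlike classical LDA)."""
    mean = X.mean(axis=0)
    n_feat = X.shape[1]
    Sb = np.zeros((n_feat, n_feat))
    Sw = np.zeros_like(Sb)
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)  # between-class scatter
        Sw += (Xc - mc).T @ (Xc - mc)                   # within-class scatter
    vals, vecs = np.linalg.eigh(Sb - Sw)                # symmetric eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:d]]

# Tiny example with more dimensions (10) than samples (4),
# exactly the regime where classical LDA breaks down.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (2, 10)), rng.normal(5, 1, (2, 10))])
y = np.array([0, 0, 1, 1])
W = mmc_projection(X, y, 1)
Z = X @ W  # one-dimensional projection
```

Even with only four samples in ten dimensions, the projection cleanly separates the two classes.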
Owing to rapid advances in, and the wide availability of, powerful image processing software, digital images have become easy for ordinary people to manipulate and modify, making it increasingly difficult for a viewer to check the authenticity of a given digital image. For digital photographs to be used as evidence in legal proceedings or to be circulated in the mass media, it is essential to determine whether an image is authentic. In this paper, we discuss techniques of copy-cover image forgery and compare four copy-cover detection methods, based on PCA, DCT, the spatial domain, and the statistical domain, respectively. We investigate their effectiveness and sensitivity under Gaussian blurring and lossy JPEG compression. We conclude that the PCA-based method outperforms the others in terms of time complexity and accuracy. In the JPEG compression simulation, its true positive rate is above 90% and its false positive rate is above 99%; in the Gaussian blurring simulation, its true positive rate is above 77% and its false positive rate is above 99%.
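A PCA-based copy-cover detector of the kind compared above is commonly built from three steps: slide a window to collect overlapping blocks, project each block onto a few principal components, then sort the feature vectors lexicographically so duplicated blocks land next to each other. The sketch below follows that generic pipeline; the block size, component count, and minimum-shift filter are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def detect_copy_move(img, block=8, n_comp=4, min_shift=8):
    """Sketch of PCA-based copy-cover detection on a grayscale image:
    overlapping blocks -> PCA features -> lexicographic sort -> match
    neighbouring feature vectors that are (near-)identical."""
    h, w = img.shape
    blocks, coords = [], []
    for i in range(h - block + 1):
        for j in range(w - block + 1):
            blocks.append(img[i:i + block, j:j + block].ravel())
            coords.append((i, j))
    B = np.asarray(blocks, dtype=float)
    B -= B.mean(axis=0)
    # PCA via SVD: keep the first n_comp principal components.
    _, _, Vt = np.linalg.svd(B, full_matrices=False)
    F = B @ Vt[:n_comp].T
    # Lexicographic sort; duplicated blocks become adjacent rows.
    order = np.lexsort(F.T[::-1])
    matches = []
    for a, b in zip(order, order[1:]):
        if np.allclose(F[a], F[b], atol=1e-6):
            (i1, j1), (i2, j2) = coords[a], coords[b]
            if abs(i1 - i2) + abs(j1 - j2) >= min_shift:  # skip near-overlaps
                matches.append((coords[a], coords[b]))
    return matches

# Simulate a copy-cover forgery: paste one 8x8 region over another.
rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(24, 24)).astype(float)
img[12:20, 12:20] = img[0:8, 0:8]
```

On this toy forgery the detector recovers the duplicated block pair at (0, 0) and (12, 12).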
A common task in dance, martial arts, animation, and many other movement genres is for the character to move in an innovative and yet stylistically consonant fashion. In this paper, we describe two mechanisms for automating this process and evaluate the results with a Turing Test. Our algorithms use the mathematics of chaos to achieve innovation and simple machine-learning techniques to enforce stylistic consonance. Because our goal is stylistic consonance, we used a Turing Test, rather than standard cross-validation-based approaches, to evaluate the results. This test indicated that the novel dance segments generated by these methods are nearing the quality of human-choreographed routines. The test-takers found the human-choreographed pieces to be more aesthetically pleasing than computer-choreographed pieces, but the computer-generated pieces were judged to be equally plausible and not significantly less graceful.
We consider the assignment of program tasks to processors in distributed computing systems such that system cost is minimized and resource constraints are satisfied. Several formulations of this task assignment problem (TAP) have been proposed in the literature. Most of these formulations, however, are NP-complete, so finding exact solutions is computationally intractable. Recently, approximation methods such as simulated annealing have been proposed, and simulation results have demonstrated the potential of metaheuristics for solving the TAP. To better understand the strengths and weaknesses of various metaheuristics applied to the TAP, we first propose two alternative metaheuristics, one based on a genetic algorithm and the other on a reinforcement learning algorithm, together with their implementation details. Extensive computational comparisons of the two heuristics against simulated annealing are presented and discussed. Based on these experimental results, a hybrid strategy employing both metaheuristics is then proposed to solve the TAP more effectively and efficiently.
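A genetic algorithm for the TAP can be sketched in a few lines: encode an assignment as a list mapping each task to a processor, score it by execution cost plus a penalty for capacity violations, and evolve a population with selection, crossover, and mutation. Everything below (the penalty weight, crossover and mutation scheme, population size) is an illustrative assumption, not the paper's implementation.

```python
import random

def ga_assign(cost, capacity, load, pop_size=30, gens=80, seed=0):
    """Toy GA for task assignment: minimize sum of cost[t][assign[t]]
    subject to per-processor capacity, enforced by a penalty term."""
    rng = random.Random(seed)
    n_tasks, n_procs = len(cost), len(cost[0])

    def fitness(assign):  # lower is better
        total = sum(cost[t][assign[t]] for t in range(n_tasks))
        used = [0] * n_procs
        for t, p in enumerate(assign):
            used[p] += load[t]
        penalty = sum(max(0, used[p] - capacity[p]) for p in range(n_procs))
        return total + 1000 * penalty

    pop = [[rng.randrange(n_procs) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        survivors = pop[:pop_size // 2]          # elitist selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_tasks)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:               # mutation
                child[rng.randrange(n_tasks)] = rng.randrange(n_procs)
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)
```

A reinforcement learning variant would replace the population update with a policy that learns which processor to pick per task; the hybrid strategy the abstract mentions would then alternate or combine the two.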
Two related and relatively obscure issues in science have eluded empirical tractability. Both can be directly traced to progress in artificial intelligence. The first is scientific proof of consciousness or otherwise in anything. The second is the role of consciousness in intelligent behaviour. This document approaches both issues by exploring the idea of using scientific behaviour self-referentially as a benchmark in an objective test for P-consciousness, which is the relevant critical aspect of consciousness. Scientific behaviour is unique in being both highly formalised and provably critically dependent on the P-consciousness of the primary senses. In the context of the primary senses P-consciousness is literally a formal identity with scientific observation. As such it is intrinsically afforded a status of critical dependency demonstrably no different to any other critical dependency in science, making scientific behaviour ideally suited to a self-referential scientific circumstance. The 'provability' derives from the delivery by science of objectively verifiable 'laws of nature'. By exploiting the critical dependency, an empirical framework is constructed as a refined and specialised version of existing propositions for a 'test for consciousness'. The specific role of P-consciousness is clarified: it is a human intracranial central nervous system construct that symbolically grounds the scientist in the distal external world, resulting in our ability to recognise, characterise and adapt to distal natural world novelty. It is hoped that in opening a discussion of a novel approach, the artificial intelligence community may eventually find a viable contender for its long overdue scientific basis.
Face verification differs from face identification. Some traditional subspace methods that work well in face identification suffer from severe over-fitting when applied to the verification task. Conventional discriminative methods such as linear discriminant analysis (LDA) and its variants are highly sensitive to the training data, which prevents them from achieving high verification accuracy. This work proposes an eigenspectrum model that alleviates over-fitting by replacing the unreliable small and zero eigenvalues with model-derived values. It also enables discriminant evaluation in the whole space, so that low-dimensional features can be extracted effectively. The proposed approach is evaluated and compared with eight popular subspace-based methods on a face verification task. Experimental results on three face databases show that the proposed method consistently outperforms the others.
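The eigenspectrum-model idea can be illustrated as follows: eigendecompose the training scatter, trust only the m largest eigenvalues, and replace the small and zero ones with values from a smooth decay model fitted to the reliable part of the spectrum. The specific 1/(k+b) decay model, the fitting method, and the cutoff m below are assumptions for illustration, not the paper's exact model.

```python
import numpy as np

def regularized_eigenspectrum(X, m):
    """Sketch of eigenspectrum regularization: keep the m largest
    (reliable) eigenvalues, replace the rest with a fitted a/(k+b)
    decay model so no eigenvalue is unreliably small or zero."""
    S = np.cov(X, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    vals, vecs = vals[::-1], vecs[:, ::-1]       # descending order
    # Fit lam_k ~ a/(k+b) to the m reliable eigenvalues via least
    # squares on the reciprocal: 1/lam_k = (k + b)/a.
    k = np.arange(1, m + 1)
    A = np.vstack([k, np.ones_like(k, dtype=float)]).T
    slope, intercept = np.linalg.lstsq(A, 1.0 / vals[:m], rcond=None)[0]
    a, b = 1.0 / slope, intercept / slope
    model = a / (np.arange(1, len(vals) + 1) + b)
    # Keep reliable eigenvalues; substitute model values for the rest.
    new_vals = np.where(np.arange(len(vals)) < m, vals, model)
    return new_vals, vecs

# Example: data with a decaying spectrum (true eigenvalues ~ 1/k).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10)) * (1.0 / np.sqrt(np.arange(1, 11)))
nv, V = regularized_eigenspectrum(X, 5)
```

The regularized spectrum is strictly positive everywhere, so whitening by it (the usual next step in such methods) never divides by a near-zero eigenvalue.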
In the early 1990s, computer scientists became motivated by the idea of rendering human-computer interactions more humanlike and natural for their users, both to address complaints that technologies impose a mechanical (sometimes even anti-social) aesthetic on the everyday environment, and to investigate innovative ways to manage system-environment complexity. With the recent development of the field of Social Robotics, and particularly Human-Robot Interaction, the integration of intentional emotional mechanisms into a system's control architecture has become inevitable. Unfortunately, this raises significant issues that must be addressed before a successful emotional artificial system can be developed. This paper adds a further dimension to the documented arguments for and against introducing emotions into artificial systems by highlighting some fundamental paradoxes and mistakes, and proposes guidelines for developing successful affective intelligent social machines.
Visual and spatial representations seem to play a significant role in analogy. In this paper, we describe a specific role of visual representations: two situations that appear dissimilar non-visuospatially may appear similar when re-represented visuospatially. We present a computational theory of analogy in which visuospatial re-representation enables analogical transfer in cases where there are ontological mismatches in the non-visuospatial representation. Realizing this theory in a computational model with specific data structures and algorithms first requires a computational model of visuospatial analogy, i.e., a model of analogy that uses only visuospatial knowledge. We have developed a computer program, called Galatea, which implements a core part of this model: it transfers problem-solving procedures between analogs that contain only visual and spatial knowledge. In this paper, we describe both how Galatea accomplishes analogical transfer using only visuospatial knowledge, and how it might be extended to support visuospatial re-representation of situations represented non-visually.
In this paper, we present an on-line learning neuro-fuzzy system inspired by mechanisms of the immune system. We show how an on-line learning neuro-fuzzy system can capture the basic elements of the immune system and exhibit some of its appealing properties. During the learning procedure, the neuro-fuzzy system is constructed incrementally. We illustrate the potential of the on-line learning neuro-fuzzy system on several benchmark classification and function approximation problems.
We present an algorithm that organizes a song repository by recording a user's memory experiences from previous music listening activities. Our method forms an affectively annotated network of songs whose connections correspond to the person's recorded memory experiences of song preferences in different states of affective bias. Once this network is formed, an intelligent affect-sensitive navigation algorithm synthesizes playlists that conform to desired affective states. The network formation method is highly individualized, in the sense that it takes into account an individual's music preferences, which are typically subjective and may differ from user to user. It is also content independent, in the sense that it neither relies on nor favors any particular music genre; in fact, the method is applicable to any type of media, not only songs. We implement our method and present evaluation results from introspection of our algorithms' execution and from feedback recorded during evaluation by human test subjects. The results clearly indicate that the proposed method significantly outperforms the typical paradigm of random song selection.
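Navigation over such an affectively annotated network can be sketched as a weighted walk: from the current song, follow only the edges recorded for the desired affective state, picking the next song in proportion to the recorded preference weight. The data layout and weights below are illustrative assumptions, not the paper's actual network model.

```python
import random

def synthesize_playlist(network, start, target_affect, length, seed=0):
    """Sketch of affect-driven playlist navigation. network[song][affect]
    is a list of (next_song, preference_weight) pairs recorded for that
    affective state; the walk avoids repeating songs."""
    rng = random.Random(seed)
    playlist, current = [start], start
    while len(playlist) < length:
        edges = [(s, w) for s, w in network[current].get(target_affect, [])
                 if s not in playlist]
        if not edges:
            break  # no unvisited neighbour for this affective state
        songs, weights = zip(*edges)
        current = rng.choices(songs, weights=weights, k=1)[0]
        playlist.append(current)
    return playlist

# Toy network: three songs annotated for a "calm" affective state.
net = {
    "a": {"calm": [("b", 0.9), ("c", 0.1)]},
    "b": {"calm": [("c", 1.0)]},
    "c": {"calm": [("a", 1.0)]},
}
playlist = synthesize_playlist(net, "a", "calm", 3)
```

Because the weights come from the individual user's recorded experiences, the same navigation code produces different playlists for different users, which is the individualization property the abstract emphasizes.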
The application of multimedia technologies to visual data, such as still images and videos, is receiving increasing attention, especially for the large number of potential innovative solutions expected to emerge in the coming years. In this context, techniques for retrieval by visual similarity are expected to boost user interest through the definition of novel paradigms for accessing digital repositories of visual data. In this paper, we define a novel model for active graph matching and describe its application to content-based image retrieval. The proposed solution belongs to the class of edit-distance-based techniques and supports active node merging during the graph matching process. We present a theoretical analysis of the computational complexity of the proposed solution, and a prototype system is evaluated on two sample image collections.