A competency-based approach is one of the important tools of human resource management aimed at achieving strategic organisational goals and a competitive advantage. The article focuses on the application of the competency-based approach in organisations in the Czech Republic. The first part of the article concentrates on the theoretical background; the second part evaluates the results of a quantitative survey. The aim of the article is to evaluate the competency-based approach in organisations in the Czech Republic, to identify the areas and activities in which the competency-based approach is applied, and to test dependencies between selected qualitative characteristics related to the issues examined. The results of the survey show that even when organisations employ the competency-based approach (35.8%), they do not apply its individual activities on an equal basis. The same holds for individual categories of employees (organisations concentrate in particular on managers and specialists).
Jones and Dzhafarov (2014) claim that several current models of speeded decision making in cognitive tasks, including the diffusion model, can be viewed as special cases of other general models or model classes. The general models can be made to match any set of response time (RT) distributions and accuracy data exactly by a suitable choice of parameters, and so are unfalsifiable. The implication of their claim is that models like the diffusion model are empirically testable only by artificially restricting them to exclude unfalsifiable instances of the general model. We show that Jones and Dzhafarov's argument depends on enlarging the class of "diffusion" models to include models in which there is little or no diffusion. The unfalsifiable models are deterministic or near-deterministic growth models, from which the effects of within-trial variability have been removed or in which they are constrained to be negligible. These models attribute most or all of the variability in RT and accuracy to across-trial variability in the rate of evidence growth, which is permitted to be distributed arbitrarily and to vary freely across experimental conditions. In contrast, in the standard diffusion model, within-trial variability in evidence is the primary determinant of variability in RT. Across-trial variability, which determines the relative speed of correct responses and errors, is theoretically and empirically constrained. Jones and Dzhafarov's attempt to include the diffusion model in a class of models that also includes deterministic growth models misrepresents and trivializes it and conveys a misleading picture of cognitive decision-making research.
Jones and Dzhafarov (2014) provided a useful service in pointing out that some assumptions of modern decision-making models require additional scrutiny. Their main result, however, is not surprising: If an infinitely complex model were created by assigning its parameters arbitrarily flexible distributions, this new model would be able to fit any observed data perfectly. Such a hypothetical model would be unfalsifiable. This is exactly why such models have never been proposed in over half a century of model development in decision making. Additionally, the main conclusion drawn from this result, that the success of existing decision-making models can be attributed to assumptions about parameter distributions, is wrong.
There have been numerous attempts to explain the enigma of autism, but existing neurocognitive theories often provide merely a refined description of 1 cluster of symptoms. Here we argue that deficits in executive functioning, theory of mind, and central coherence can all be understood as the consequence of a core deficit in the flexibility with which people with autism spectrum disorder can process violations to their expectations. More formally, we argue that the human mind processes information by making and testing predictions and that the errors resulting from violations to these predictions are given a uniform, inflexibly high weight in autism spectrum disorder. The complex, fluctuating nature of regularities in the world and the stochastic and noisy biological system through which people experience it require that, in the real world, people not only learn from their errors but also need to (meta-)learn to sometimes ignore errors. Especially when situations (e.g., social) or stimuli (e.g., faces) become too complex or dynamic, people need to tolerate a certain degree of error in order to develop a more abstract level of representation. Starting from an inability to flexibly process prediction errors, a number of seemingly core deficits become logically secondary symptoms. Moreover, an insistence on sameness or the acting out of stereotyped and repetitive behaviors can be understood as attempts to provide a reassuring sense of predictive success in a world otherwise filled with error.
Attitudes, theorized as behavioral guides, have long been a central focus of research in the social sciences. However, this theorizing reflects primarily Western philosophical views and empirical findings emphasizing the centrality of personal preferences. As a result, the prevalent psychological model of attitudes is a person-centric one. We suggest that incorporating research insights from non-Western sociocultural contexts can significantly enhance attitude theorizing. To this end, we propose an additional model: a normative-contextual model of attitudes. The currently dominant person-centric model emphasizes the centrality of personal preferences, their stability and internal consistency, and their possible interaction with externally imposed norms. In contrast, the normative-contextual model emphasizes that attitudes are always context-contingent and incorporate the views of others and the norms of the situation. In this model, adjustment to norms does not involve an effortful struggle between the authentic self and exogenous forces. Rather, it is the ongoing and reassuring integration of others' views into one's attitudes. According to the normative-contextual model, likely to be a good fit in contexts that foster interdependence and holistic thinking, attitudes need not be personal or necessarily stable and internally consistent and are only functional to the extent that they help one to adjust automatically to different contexts. The fundamental shift in focus offered by the normative-contextual model generates novel hypotheses and highlights new measurement criteria for studying attitudes in non-Western sociocultural contexts. We discuss these theoretical and measurement implications as well as practical implications for health and well-being, habits and behavior change, and global marketing.
Despite decades of research demonstrating a dedicated link between positive and negative affect and specific cognitive processes, not all research is consistent with this view. We present a new overarching theoretical account as an alternative, one that can simultaneously account for prior findings, generate new predictions, and encompass a wide range of phenomena. According to our proposed affect-as-cognitive-feedback account, affective reactions confer value on accessible information processing strategies (e.g., global vs. local processing) and other responses, goals, concepts, and thoughts that happen to be accessible at the time. This view underscores that the relationship between affect and cognition is not fixed but, instead, is highly malleable. That is, the relationship between affect and cognitive processing can be altered, and often reversed, by varying the mental context in which it is experienced. We present evidence that supports this account, along with implications for specific affective states and other subjective experiences.
Recollection is currently modeled as a univariate retrieval process in which memory probes provoke conscious awareness of contextual details of earlier target presentations. However, that conception cannot explain why some manipulations that increase recollection in recognition experiments suppress false memory in false memory experiments, whereas others increase false memory. Such contrasting effects can be explained if recollection is bivariate: if memory probes can provoke conscious awareness of target items per se, separately from awareness of contextual details, with false memory being suppressed by the former but increased by the latter. Interestingly, these 2 conceptions of recollection have coexisted for some time in different segments of the memory literature. Independent support for the dual-recollection hypothesis is provided by some surprising effects that it predicts, such as release from recollection rejection, false persistence, negative relations between false alarm rates and target remember/know judgments, and recollection without remembering. We implemented the hypothesis in 3 bivariate recollection models, which differ in the degree to which recollection is treated as a discrete or a graded process: a pure multinomial model, a pure signal detection model, and a mixed multinomial/signal detection model. The models were applied to a large corpus of conjoint recognition data, with fits being satisfactory when both recollection processes were present and unsatisfactory when either was deleted. Factor analyses of the models' parameter spaces showed that target and context recollection never loaded on a common factor, and the 3 models converged on the same process loci for the effects of important experimental manipulations. Thus, a variety of results were consistent with bivariate recollection.
Homeostasis, the dominant explanatory framework for physiological regulation, has undergone significant revision in recent years, with contemporary models differing significantly from the original formulation. Allostasis, an alternative view of physiological regulation, goes beyond its homeostatic roots, offering novel insights relevant to our understanding and treatment of several chronic health conditions. Despite growing enthusiasm for allostasis, the concept remains diffuse, due in part to ambiguity as to how the term is understood and used, impeding meaningful translational and clinical research on allostasis. Here we provide a more focused understanding of homeostasis and allostasis by explaining how both play a role in physiological regulation; a critical analysis of regulation suggests how homeostasis and allostasis can be distinguished. Rather than focusing on changes in the value of a regulated variable (e.g., body temperature, body adiposity, or reward), research investigating the activity and relationship among the multiple regulatory loops that influence the value of these regulated variables may be the key to distinguishing homeostasis and allostasis. The mechanisms underlying physiological regulation and dysregulation are likely to have important implications for health and disease.
Selective deficits in aphasic patients’ grammatical production and comprehension are often cited as evidence that syntactic processing is modular and localizable in discrete areas of the brain (e.g., Y. Grodzinsky, 2000). The authors review a large body of experimental evidence suggesting that morphosyntactic deficits can be observed in a number of aphasic and neurologically intact populations. They present new data showing that receptive agrammatism is observed not only in a range of aphasic groups but also in neurologically intact individuals processing under stressful conditions. The authors suggest that these data are most compatible with a domain-general account of language, one that emphasizes the interaction of linguistic distributions with the properties of an associative processor working under normal or suboptimal conditions.
Three questions have been prominent in the study of visual working memory limitations: (a) What is the nature of mnemonic precision (e.g., quantized or continuous)? (b) How many items are remembered? (c) To what extent do spatial binding errors account for working memory failures? Modeling studies have typically focused on comparing possible answers to a single one of these questions, even though the result of such a comparison might depend on the assumed answers to the other two. Here, we consider every possible combination of previously proposed answers to the individual questions. Each model is then a point in a 3-factor model space containing a total of 32 models, of which only 6 have been tested previously. We compare all models on data from 10 delayed-estimation experiments from 6 laboratories (for a total of 164 subjects and 131,452 trials). Consistently across experiments, we find that (a) mnemonic precision is not quantized but continuous and not equal but variable across items and trials; (b) the number of remembered items is likely to be variable across trials, with a mean of 6.4 in the best model (median across subjects); (c) spatial binding errors occur but explain only a small fraction of responses (16.5% at set size 8 in the best model). We find strong evidence against all 6 documented models. Our results demonstrate the value of factorial model comparison in working memory.
Many influential memory models are computational in the sense that their predictions are derived through simulation. This means that it is difficult or impossible to write down a probability distribution or likelihood that characterizes the random behavior of the data as a function of the model's parameters. In turn, the lack of a likelihood means that these models cannot be directly fitted to data using traditional techniques. In particular, standard Bayesian analyses of such models are impossible.
Confidence in judgments is a fundamental aspect of decision making, and tasks that collect confidence judgments are an instantiation of multiple-choice decision making. We present a model for confidence judgments in recognition memory tasks that uses a multiple-choice diffusion decision process with separate accumulators of evidence for the different confidence choices. The accumulator that first reaches its decision boundary determines which choice is made. Five algorithms for accumulating evidence were compared, and one of them produced proportions of responses for each of the choices and full response time distributions for each choice that closely matched empirical data. With this algorithm, an increase in the evidence in one accumulator is accompanied by a decrease in the others so that the total amount of evidence in the system is constant. Application of the model to the data from an earlier experiment (Ratcliff, McKoon, & Tindall, 1994) uncovered a relationship between the shapes of z-transformed receiver operating characteristics and the behavior of response time distributions. Both are explained in the model by the behavior of the decision boundaries. For generality, we also applied the decision model to a 3-choice motion discrimination task and found it accounted for data better than a competing class of models. The confidence model presents a coherent account of confidence judgments and response time that cannot be explained with currently popular signal detection theory analyses or dual-process models of recognition.
Recent articles, including Benjamin, Diaz, & Wee (2009), have argued that recognition memory may be better understood if consideration is given to sources of noise in the decisions, as well as to those in the representations, underlying recognition judgments. They based that conclusion on a wide consideration of persisting mysteries in recognition research as well as a new experimental paradigm involving ensemble recognition. Kellen, Klauer, and Singmann (2012) reanalyzed the Benjamin et al. data and introduced their own new experimental paradigm to this debate. They concluded that criteria do not vary much from trial to trial in recognition testing, and thus that decision noise in recognition is small or nonexistent. However, their alternative interpretation of the Benjamin et al. data relies on a questionable decision to reject all models in which the locations of criteria were restricted to be the same across ensembles and a meta-assumption that a model should be rejected as false if it yields unconventional parameters. In addition, their experimental logic relies on the assumption that ranking tasks are always bias-free. Here I question these assumptions and suggest avenues for reconciliation between these contrasting claims.
The authors evaluated 4 sequential sampling models for 2-choice decisions—the Wiener diffusion, Ornstein–Uhlenbeck (OU) diffusion, accumulator, and Poisson counter models—by fitting them to the response time (RT) distributions and accuracy data from 3 experiments. Each of the models was augmented with assumptions of variability across trials in the rate of accumulation of evidence from stimuli, the values of response criteria, and the value of base RT across trials. Although there was substantial model mimicry, empirical conditions were identified under which the models make discriminably different predictions. The best accounts of the data were provided by the Wiener diffusion model, the OU model with small-to-moderate decay, and the accumulator model with long-tailed (exponential) distributions of criteria, although the last was unable to produce error RTs shorter than correct RTs. The relationship between these models and 3 recent, neurally inspired models was also examined.
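The Wiener diffusion process with across-trial drift variability that these abstracts debate can be illustrated with a short simulation. The sketch below is not the authors' fitting code; parameter names follow common conventions in the diffusion-model literature (drift rate v, its across-trial SD eta, boundary separation a, starting point z, diffusion coefficient s, nondecision time t0), and the specific values are illustrative only.

```python
import numpy as np

def simulate_diffusion(n_trials=1000, v=0.25, eta=0.1, a=0.1, z=0.05,
                       s=0.1, t0=0.3, dt=0.001, rng=None):
    """Minimal sketch of a Wiener diffusion model with across-trial
    drift variability. Evidence starts at z and diffuses between
    boundaries 0 and a; within-trial noise has SD s per unit time.
    All parameter values here are illustrative, not fitted estimates."""
    rng = rng or np.random.default_rng(0)
    rts, choices = [], []
    for _ in range(n_trials):
        drift = rng.normal(v, eta)  # across-trial variability in drift rate
        x, t = z, 0.0
        while 0.0 < x < a:
            # Euler step: deterministic drift plus within-trial diffusion noise
            x += drift * dt + s * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + t0)                   # decision time plus nondecision time
        choices.append(1 if x >= a else 0)   # upper boundary = "correct" response
    return np.array(rts), np.array(choices)
```

With positive mean drift, most trials terminate at the upper boundary, and the RT distribution shows the characteristic right skew; shrinking s toward 0 while letting the drift distribution absorb all variability yields the near-deterministic growth models criticized in the reply above.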
The medial temporal lobe (MTL) has been studied extensively at all levels of analysis, yet its function remains unclear. Theory regarding the cognitive function of the MTL has centered along 3 themes. Different authors have emphasized the role of the MTL in episodic recall, spatial navigation, or relational memory. Starting with the temporal context model (M. W. Howard and M. J. Kahana, 2002), a distributed memory model that has been applied to benchmark data from episodic recall tasks, the authors propose that the entorhinal cortex supports a gradually changing representation of temporal context and the hippocampus proper enables retrieval of these contextual states. Simulation studies show this hypothesis explains the firing of place cells in the entorhinal cortex and the behavioral effects of hippocampal lesion in relational memory tasks. These results constitute a first step towards a unified computational theory of MTL function that integrates neurophysiological, neuropsychological and cognitive findings.
The diffusion model for 2-choice decisions (R. Ratcliff, 1978) was applied to data from lexical decision experiments in which word frequency, proportion of high- versus low-frequency words, and type of nonword were manipulated. The model gave a good account of all of the dependent variables—accuracy, correct and error response times, and their distributions—and provided a description of how the component processes involved in the lexical decision task were affected by experimental variables. All of the variables investigated affected the rate at which information was accumulated from the stimuli—called drift rate in the model. The different drift rates observed for the various classes of stimuli can all be explained by a 2-dimensional signal-detection representation of stimulus information. The authors discuss how this representation and the diffusion model’s decision process might be integrated with current models of lexical access.
A new explanation is proposed for a long-standing question in psycholinguistics: Why are some reduced relative clauses so difficult to comprehend? It is proposed that the meanings of some verbs, like race, are incompatible with the meaning of the reduced relative clause and that this incompatibility makes sentences like "The horse raced past the barn fell" unacceptable. In support of their hypotheses, the authors show that reduced relatives of the "The horse raced past the barn fell" type occur in naturally produced sentences with a near-zero probability, whereas reduced relatives with other verbs occur with a probability of about 1 in 20. The authors also support the hypotheses with a number of psycholinguistic experiments and corpus studies.
Much recent research has aimed to establish whether visual working memory (WM) is better characterized by a limited number of discrete all-or-none slots or by a continuous sharing of memory resources. To date, however, researchers have not considered the response-time (RT) predictions of discrete-slots versus shared-resources models. To complement the past research in this field, we formalize a family of mixed-state, discrete-slots models for explaining choice and RTs in tasks of visual WM change detection. In the tasks under investigation, a small set of visual items is presented, followed by a test item in 1 of the studied positions for which a change judgment must be made. According to the models, if the studied item in that position is retained in 1 of the discrete slots, then a memory-based evidence-accumulation process determines the choice and the RT; if the studied item in that position is missing, then a guessing-based accumulation process operates. Observed RT distributions are therefore theorized to arise as probabilistic mixtures of the memory-based and guessing distributions. We formalize an analogous set of continuous shared-resources models. The model classes are tested on individual subjects with both qualitative contrasts and quantitative fits to RT-distribution data. The discrete-slots models provide much better qualitative and quantitative accounts of the RT and choice data than do the shared-resources models, although there is some evidence for “slots plus resources” when memory set size is very small.
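The mixed-state logic of the discrete-slots account can be sketched in a few lines. This is a simplified illustration of the choice predictions only (the abstract's evidence-accumulation machinery for RTs is omitted); the parameters k (number of slots) and d_mem (accuracy of a memory-based decision) are hypothetical values chosen for illustration.

```python
import numpy as np

def simulate_change_detection(n_trials, set_size, k=3, d_mem=0.9, rng=None):
    """Mixed-state discrete-slots sketch for change detection (choices only).
    With probability p_mem = min(k / set_size, 1) the probed item occupies a
    slot and the response is memory-based (correct with probability d_mem);
    otherwise the response comes from a guessing state (correct with .5).
    k and d_mem are illustrative, not fitted, values."""
    rng = rng or np.random.default_rng(1)
    p_mem = min(k / set_size, 1.0)
    in_slot = rng.random(n_trials) < p_mem
    correct = np.where(in_slot,
                       rng.random(n_trials) < d_mem,   # memory-based state
                       rng.random(n_trials) < 0.5)     # guessing state
    return correct

# Predicted accuracy: p_mem * d_mem + (1 - p_mem) * 0.5,
# which stays flat up to set size k and then declines toward chance.
```

Observed responses (and, in the full model, RT distributions) are thus probabilistic mixtures of the memory-based and guessing states, which is the key qualitative contrast with continuous shared-resources models.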
Individually, visual neurons are each selective for several aspects of stimulation, such as stimulus location, frequency content, and speed. Collectively, the neurons implement the visual system’s preferential sensitivity to some stimuli over others, manifested in behavioral sensitivity functions. We ask how the individual neurons are coordinated to optimize visual sensitivity. We model synaptic plasticity in a generic neural circuit, and find that stochastic changes in strengths of synaptic connections entail fluctuations in parameters of neural receptive fields. The fluctuations correlate with uncertainty of sensory measurement in individual neurons: the higher the uncertainty the larger the amplitude of fluctuation. We show that this simple relationship is sufficient for the stochastic fluctuations to steer sensitivities of neurons toward a characteristic distribution, from which follows a sensitivity function observed in human psychophysics, and which is predicted by a theory of optimal allocation of receptive fields. The optimal allocation arises in our simulations without supervision or feedback about system performance and independently of coupling between neurons, making the system highly adaptive and sensitive to prevailing stimulation.
Infants segment words from fluent speech during the same period when they are learning phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning by providing information about which sounds occur together in words. Simulations demonstrate that word-level information can successfully disambiguate overlapping English vowel categories. Learning patterns in the model are shown to parallel human behavior from artificial language learning tasks. These findings point to a central role for the developing lexicon in phonetic category acquisition and provide a framework for incorporating top-down constraints into models of category learning.
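The way word-level feedback can disambiguate overlapping sound categories can be shown with a toy Bayesian classifier. This is not the authors' model; the two Gaussian "vowel" categories, the word frames, and the lexical probabilities below are all hypothetical, chosen only to illustrate how a top-down lexical prior separates acoustically ambiguous tokens.

```python
from math import exp, sqrt, pi

def gauss(x, mu, sd):
    """Gaussian likelihood of acoustic value x under a category."""
    return exp(-(x - mu) ** 2 / (2 * sd ** 2)) / (sd * sqrt(2 * pi))

# Two overlapping vowel categories (hypothetical 1-D acoustic means/SDs).
CATS = {"i": (2.0, 1.0), "e": (3.0, 1.0)}

# Hypothetical lexicon: how often each vowel occurs in each word frame.
# This is the top-down, word-level constraint.
P_VOWEL_GIVEN_WORD = {"b_d": {"i": 0.9, "e": 0.1},
                      "p_t": {"i": 0.2, "e": 0.8}}

def classify(x, word=None):
    """Posterior over vowel categories for acoustic value x, optionally
    constrained by the word frame in which the sound occurs."""
    post = {}
    for v, (mu, sd) in CATS.items():
        prior = P_VOWEL_GIVEN_WORD[word][v] if word else 0.5
        post[v] = prior * gauss(x, mu, sd)
    z = sum(post.values())
    return {v: p / z for v, p in post.items()}
```

A token midway between the two category means is fully ambiguous bottom-up, but the same token is classified confidently once the word frame supplies a lexical prior, which is the qualitative pattern the simulations in the abstract demonstrate.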