Certain buzzwords have become ubiquitous in today's mass media and are universally known terms used in everyday speech. If we look behind these often misused buzzwords, we find at least one common element, namely data. Although we hardly use these terms in the “classic discipline” of mineral economics, we find various similarities. The case of phosphate data poses numerous challenges in multiple forms, such as uncertainties, fuzziness, or misunderstandings. Simulation models are often used to support decision-making processes, and for all such models, reliable and accurate sets of data are an essential premise. A significant number of data series relating to the phosphorus supply chain are available, including resource inventories as well as production, consumption, and trade data ranging from phosphate rock to intermediates like marketable concentrate to final phosphate fertilizers. Data analysts and modelers must often choose from various sources, and they also depend on data access. Based on a transdisciplinary orientation, we aim to help colleagues in all fields by illustrating quantitative differences among the reported data, taking a somewhat engineering-oriented approach. We use common descriptive statistics to measure and causally explain discrepancies in global phosphate-rock production data issued by the US Geological Survey, the British Geological Survey, Austrian World Mining Data, the International Fertilizer Association, and CRU International over time, with a focus on the most recent years. Furthermore, we provide two snapshots of global-trade flows for phosphate-rock concentrate, in 2015 and 1985, and compare these to an approach using total-nutrient data. We find discrepancies of up to 30% in reported global production volume, the major share of which can be assigned directly to China and Peru.
Consequently, we call for a global, independent agency to collect and monitor phosphate data in order to reduce uncertainties or fuzziness and, thereby, ultimately support policy-making processes.
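The descriptive comparison of production series described above can be sketched in a few lines; the series names and figures below are purely illustrative placeholders, not actual values reported by USGS, BGS, or any other agency.

```python
# Sketch: relative discrepancy between two reported production series.
# All figures below are illustrative placeholders, not real reported values.

def rel_discrepancy(a, b):
    """Percentage discrepancy relative to the mean of two reported values."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0

# Two hypothetical agencies reporting global production (in Mt) per year.
series_a = {2013: 220, 2014: 230, 2015: 240}
series_b = {2013: 210, 2014: 250, 2015: 310}

for year in sorted(series_a):
    d = rel_discrepancy(series_a[year], series_b[year])
    print(f"{year}: {d:.1f}% discrepancy")
```

Normalizing by the mean of the two reports (rather than by either single source) avoids privileging one agency as the ground truth.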
The astonishing propagation of microfinance institutions (MFIs) around the world has been followed by an indiscriminate proliferation of concepts for describing these organizations. These concepts share a tendency to overlook the historical roots of microfinance, to disregard some types of MFIs, to impose arbitrarily discrete categories on a non-uniform field, and to neglect important constitutive attributes inherent to all MFIs. This conceptual fuzziness brings about several theoretical and practical obstacles. In this paper, we address this issue by providing a two-dimensional framework, built on the five constitutive attributes inherent to all MFIs, to reduce the conceptual blurriness of microfinance. In doing so, we deliver a threefold contribution: 1) we address the call to reduce the conceptual fuzziness within the microfinance field by providing a tool for characterizing and distinguishing between the different MFIs across this industry based on their constitutive attributes, and in addition we advance the growing literature that considers MFIs as hybrid organizations; 2) by exposing these five attributes, we shift the focus of policy makers from one idealized (and limiting) best model of MFIs toward a more diverse range of organizational configurations, offering a better fit for their specific target publics and contexts; and 3) finally, we show how the different types of microfinance can foster sustainable development.
The first aim is to emphasize the use of fuzziness in data analysis to capture information that has traditionally been disregarded, at a cost to the precision of the conclusions. Fuzziness can enter the data analysis process at various stages, but the main target in this paper is fuzziness in the data themselves. Depending on the nature of the fuzzy data and the purpose for which they are handled, different approaches should be applied. We attempt to contribute to the clarification of this difference, focusing on the so-called ontic approach in contrast to the epistemic approach. The second aim is to underline the need to consider robust methods that reduce the misleading impact of outliers in fuzzy data analysis. We propose trimming as a general and intuitive method to discard outliers, exemplify this approach with the ontic fuzzy trimmed mean and variance, and highlight the differences with the epistemic case. All discussions and developments are illustrated by means of a case study concerning the perception of lengths by men and women.
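The trimming idea can be sketched on interval-valued data, a simple stand-in for fuzzy numbers under the ontic view (each interval is the datum itself). The midpoint/spread distance and the trimming fraction below are illustrative choices, not the authors' exact definitions.

```python
import math

# Sketch of a trimmed mean for interval-valued data. The metric (midpoint
# and spread) and the trimming fraction alpha are illustrative choices.

def mid_spr(iv):
    lo, hi = iv
    return ((lo + hi) / 2.0, (hi - lo) / 2.0)

def dist(iv1, iv2):
    """Euclidean distance between intervals in (midpoint, spread) space."""
    m1, s1 = mid_spr(iv1)
    m2, s2 = mid_spr(iv2)
    return math.hypot(m1 - m2, s1 - s2)

def interval_mean(data):
    n = len(data)
    return (sum(lo for lo, _ in data) / n, sum(hi for _, hi in data) / n)

def trimmed_interval_mean(data, alpha=0.2):
    """Discard the ceil(alpha*n) intervals farthest from the plain mean,
    then average the rest."""
    m = interval_mean(data)
    keep = sorted(data, key=lambda iv: dist(iv, m))
    keep = keep[: len(data) - math.ceil(alpha * len(data))]
    return interval_mean(keep)
```

With a single far-away interval in the sample, the trimmed mean stays close to the bulk of the data while the plain mean is dragged toward the outlier, which is exactly the robustness property the abstract argues for.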
The wide usage of relational databases has motivated researchers to develop more user-friendly interfaces that allow a larger population of users to access databases. Such interfaces range from visual to natural-language based. This paper contributes a question-driven query model that falls into the natural-language-based category. The proposed model supports fuzziness: every user is given the freedom to define his or her own understanding of fuzzy terms. The developed system captures each user's fuzzy understanding and uses it when deciding on the result to communicate back as the answer to a raised question. Data mining techniques are employed to guide users in defining their fuzzy understanding. The model is intended to help users retrieve data from a relational database without expecting them to know SQL. The system handles different types of questions, including (1) simple questions, (2) complex questions with inner joins and where conditions, (3) questions that involve aggregate functions (e.g., min, max, etc.), and (4) questions with fuzzy terms. The reported test results demonstrate the effectiveness and applicability of the developed system in handling various types of questions raised by a heterogeneous set of users ranging from professional to naive.
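One way a per-user fuzzy term can be grounded is as a membership function whose alpha-cut becomes an ordinary SQL range condition. The trapezoidal shape, the term definitions, and the `fuzzy_where` helper below are hypothetical illustrations, not the paper's actual implementation.

```python
# Sketch: mapping a user-defined fuzzy term to a crisp SQL predicate via an
# alpha-cut. Term definitions and helper names are hypothetical.

# Each fuzzy term is a trapezoid (a, b, c, d): membership rises from a to b,
# equals 1 between b and c, and falls from c to d.
user_terms = {"young": (0.0, 0.0, 25.0, 35.0), "cheap": (0.0, 0.0, 10.0, 20.0)}

def alpha_cut(term, alpha):
    """Interval of values whose membership in `term` is at least `alpha`."""
    a, b, c, d = user_terms[term]
    return (a + alpha * (b - a), d - alpha * (d - c))

def fuzzy_where(column, term, alpha=0.5):
    """Render the alpha-cut of a fuzzy term as a SQL BETWEEN condition."""
    lo, hi = alpha_cut(term, alpha)
    return f"{column} BETWEEN {lo} AND {hi}"

print(fuzzy_where("age", "young"))
```

Because the trapezoid parameters are stored per user, two users asking the same question about "young" employees can legitimately receive different result sets, which matches the personalization described in the abstract.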
Semi-supervised learning, which plays a crucial role in machine learning, can be described from different perspectives. In this study, a new aspect of semi-supervised learning is explored by investigating a divide-and-conquer strategy based on fuzziness to improve the performance of classifiers. In this approach, adding the category of samples with low fuzziness to the training set can improve training accuracy, which is confirmed experimentally and explained through the theory of learning from noisy data. The impact of a base classifier's initial accuracy on the achievable improvement is also studied and found to be significant. Experimental results show that the improvement in accuracy, which is sensitive to the base classifier, attains its maximum when the initial accuracy is between 70% and 80%.
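The fuzziness of a classifier's membership output can be measured with the classical De Luca–Termini index and used to split unlabeled samples into low/mid/high-fuzziness groups; the group thresholds below are illustrative assumptions, not the paper's values.

```python
import math

# De Luca-Termini fuzziness of a membership vector mu in [0,1]^n:
# zero for crisp outputs (all memberships 0 or 1), maximal at mu_i = 0.5.
def fuzziness(mu):
    n = len(mu)
    h = 0.0
    for p in mu:
        if 0.0 < p < 1.0:
            h -= p * math.log2(p) + (1 - p) * math.log2(1 - p)
    return h / n

def partition(samples, low=0.3, high=0.7):
    """Split (sample, membership_vector) pairs into low/mid/high-fuzziness
    groups; the low group would be added to the training set."""
    groups = {"low": [], "mid": [], "high": []}
    for x, mu in samples:
        f = fuzziness(mu)
        key = "low" if f < low else "high" if f > high else "mid"
        groups[key].append(x)
    return groups
```

Samples the classifier is already confident about (low fuzziness) act like nearly clean pseudo-labels, which is why adding them to the training set connects to the theory of learning from noisy data.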
The quality of the new data used in the sequential learning phase of the online sequential extreme learning machine algorithm (OS-ELM) has a significant impact on its performance. This paper proposes a novel data-filter mechanism for OS-ELM from the perspective of fuzziness, yielding a fuzziness-based online sequential extreme learning machine algorithm (FOS-ELM). In FOS-ELM, when new data arrive, a fuzzy classifier first picks out the meaningful data according to the fuzziness of each sample: the new samples with high output fuzziness are selected and then used in sequential learning. Experimental results on eight binary and three multiclass classification problems show that FOS-ELM, updated with the high-output-fuzziness samples, has better generalization performance than OS-ELM. Since unimportant data are discarded before sequential learning, FOS-ELM also uses less memory and achieves higher computational efficiency. In addition, FOS-ELM can handle data one-by-one or chunk-by-chunk with fixed or varying chunk sizes. The relationship between the fuzziness of new samples and model performance is also studied, which is expected to provide useful guidelines for improving the generalization ability of online sequential learning algorithms.
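The filter step of such an update can be sketched for the binary case with a simple margin-based fuzziness proxy, 1 - |2p - 1|; both the proxy and the threshold are illustrative assumptions, not the paper's exact measure.

```python
# Sketch: keep only high-fuzziness samples from an arriving chunk before the
# sequential update. Fuzziness proxy and threshold are illustrative.

def output_fuzziness(p):
    """Binary-case proxy: 0 for confident outputs (p near 0 or 1), 1 at p = 0.5."""
    return 1.0 - abs(2.0 * p - 1.0)

def filter_chunk(chunk, predict_proba, threshold=0.5):
    """Return the samples in `chunk` whose predicted-probability fuzziness
    exceeds `threshold`; only these feed the sequential update."""
    return [x for x in chunk if output_fuzziness(predict_proba(x)) > threshold]
```

In contrast to the semi-supervised setting, here the *high*-fuzziness samples are kept: they lie near the current decision boundary and therefore carry the most new information for the sequential update, while confident samples can be dropped to save memory.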
We show that the uncertainty in distance and time measurements found by the heuristic combination of quantum mechanics and general relativity is reproduced in a purely classical and flat multi-fractal spacetime whose geometry changes with the probed scale (dimensional flow) and has non-zero imaginary dimension, corresponding to a discrete scale invariance at short distances. Thus, dimensional flow can manifest itself as an intrinsic measurement uncertainty and, conversely, measurement-uncertainty estimates are generally valid because they rely on this universal property of quantum geometries. These general results affect multi-fractional theories, a recent proposal related to quantum gravity, in two ways: they can fix two parameters previously left free (in particular, the value of the spacetime dimension at short scales) and point towards a reinterpretation of the ultraviolet structure of geometry as a stochastic foam or fuzziness. This is also confirmed by a correspondence we establish between Nottale scale relativity and the stochastic geometry of multi-fractional models.
This study presents an Interval-based Fuzzy Chance-constrained Irrigation Water Allocation (IFCIWA) model with double-sided fuzziness for supporting irrigation water management. It is derived by incorporating double-sided fuzzy chance-constrained programming (DFCCP) into an interval parameter programming (IPP) framework. The model integrates interval linear crop water production functions into its general framework for irrigation water allocation. Moreover, it can deal with uncertainties presented as discrete intervals and as fuzziness, and it allows violation of system constraints with double-sided fuzziness, where each confidence level consists of two reliability scenarios (i.e., minimum- and maximum-reliability scenarios). To demonstrate its applicability, the model is applied to a case study in the middle reaches of the Heihe River Basin, northwest China, and optimal solutions are generated for irrigation water allocation under uncertainty. The results indicate that planning under a lower confidence level and the minimum reliability scenario provides maximized system benefits: under the high water level and the minimum reliability scenario, system benefits decrease from [2.659, 7.913] × 10 Yuan to [2.650, 7.822] × 10 Yuan and [2.642, 7.734] × 10 Yuan as the confidence level increases. Furthermore, the results support in-depth analysis of the interrelationships among system benefits, confidence levels, reliability levels, and risk levels, and can effectively provide decision support for managers identifying desired irrigation water allocation plans in the study area.
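The simplest ingredient of the IPP framework can be sketched directly: when the benefit coefficients of the crop water production functions are intervals, the total system benefit of a nonnegative allocation is itself an interval, like the [2.659, 7.913]-style results above. This is only the interval-propagation step, not the full chance-constrained IFCIWA model.

```python
# Sketch: interval-parameter propagation for a linear benefit function.
# For nonnegative allocations x_i and interval coefficients [c_lo, c_hi],
# the total benefit sum(c_i * x_i) is the interval below.

def interval_benefit(allocs, coeffs):
    """Total benefit interval for nonnegative allocations and interval
    benefit coefficients (lo, hi) per crop."""
    lo = sum(x * c_lo for x, (c_lo, _) in zip(allocs, coeffs))
    hi = sum(x * c_hi for x, (_, c_hi) in zip(allocs, coeffs))
    return (lo, hi)
```

Monotonicity in each coefficient (for nonnegative allocations) is what lets the lower and upper bounds be computed independently, and it is the same property that lets IPP models be split into two deterministic submodels.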
This comprehensive, bird's-eye-view research note combines the state of the art, a brief presentation of the history, some original solutions, and position-like views on prospective future developments in one of the most relevant and interesting areas related to the use of fuzzy logic in database management systems, notably in their querying component and, to some extent, in the broader issue of data and information management. We briefly summarize the roots of these new applications of fuzzy logic and the more relevant proposals and developments in the context of fuzzifying the basic relational database model, and then some of its further generalizations. We particularly focus on fuzzy querying as a human-consistent and friendly way of retrieving information according to real human intentions and preferences expressed in natural language and represented via fuzzy logic and possibility theory. We mention some extensions, notably fuzzy queries with linguistic quantifiers, and point out their close relation to linguistic summaries. As for newer, prospective developments, we mainly focus on bipolar queries, which can accommodate users' intentions and preferences involving required and desired, mandatory and optional, etc., conditions, and we show various ways of handling such queries. We conclude with brief position statements on relevant and promising directions and challenges.
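A fuzzy query with the linguistic quantifier "most" can be evaluated in Zadeh's calculus: average the membership degrees of the rows in the predicate and pass the ratio through the quantifier's membership function. The piecewise-linear shape of "most" below is a classic textbook choice; the membership function and record data are illustrative.

```python
# Sketch: truth degree of "most records have a low price" (Zadeh's calculus
# of linguistically quantified propositions). Shapes and data are illustrative.

def most(r):
    """Piecewise-linear 'most': 0 up to 0.3, 1 from 0.8, linear in between."""
    if r <= 0.3:
        return 0.0
    if r >= 0.8:
        return 1.0
    return (r - 0.3) / 0.5

def low_price(price, full=10.0, zero=30.0):
    """Membership in 'low price': 1 up to `full`, 0 beyond `zero`."""
    if price <= full:
        return 1.0
    if price >= zero:
        return 0.0
    return (zero - price) / (zero - full)

prices = [8.0, 12.0, 25.0, 9.0]  # illustrative records
r = sum(low_price(p) for p in prices) / len(prices)
truth = most(r)
```

The same computation, run per group instead of per query, yields a linguistic summary such as "most products are cheap," which is the close relation between quantified queries and linguistic summaries mentioned above.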