This paper investigates the relationship between the fuzziness of a classifier and its misclassification rate on a group of samples. For a given trained classifier that outputs a membership vector, we demonstrate experimentally that samples receiving higher fuzziness from the classifier carry a greater risk of misclassification. We then propose a fuzziness-category-based divide-and-conquer strategy that separates the high-fuzziness samples from the low-fuzziness samples, and a particular technique is applied to the high-fuzziness samples to improve the classifier's performance. The rationale of the approach is explained theoretically and its effectiveness is demonstrated experimentally.
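As an illustration of the ideas above, the following is a minimal sketch of an entropy-like fuzziness measure on a membership vector and of the divide-and-conquer split; the measure used here is a common choice in this literature, and the function names and threshold are illustrative assumptions rather than the paper's exact formulas.

```python
import math

def fuzziness(mu, eps=1e-12):
    """Entropy-like fuzziness of a membership vector mu (grades in [0, 1]).
    It is 0 for crisp vectors (all grades 0 or 1) and maximal when every
    grade equals 0.5."""
    total = 0.0
    for m in mu:
        m = min(max(m, eps), 1 - eps)  # clip to avoid log(0)
        total -= m * math.log(m) + (1 - m) * math.log(1 - m)
    return total / len(mu)

def split_by_fuzziness(memberships, threshold):
    """Divide-and-conquer split: route high-fuzziness (risky) samples to a
    dedicated handler, leave low-fuzziness samples to the base classifier."""
    high = [mu for mu in memberships if fuzziness(mu) > threshold]
    low = [mu for mu in memberships if fuzziness(mu) <= threshold]
    return high, low
```

A near-uniform vector such as `[0.5, 0.5]` scores the maximal per-element fuzziness `log 2`, while a confident vector such as `[0.99, 0.01]` scores close to zero, which is what makes a simple threshold split meaningful.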
We investigate essential relationships between the generalization capabilities and fuzziness of fuzzy classifiers (viz., classifiers whose outputs are vectors of membership grades of a pattern to the individual classes). The study makes a claim, and offers sound evidence for the observation, that higher fuzziness of a fuzzy classifier may imply better generalization aspects of the classifier, especially for classification data exhibiting complex boundaries. This observation runs counter to a commonly accepted position in "traditional" pattern recognition. The relationship, which obeys the conditional maximum entropy principle, is confirmed experimentally. Furthermore, it can be explained by the fact that samples located close to classification boundaries are more difficult to classify correctly than samples positioned far from those boundaries. The relationship is expected to provide some guidelines for improving the generalization aspects of fuzzy classifiers.
An integrated approach to truth-gaps and epistemic uncertainty is described, based on probability distributions defined over a set of three-valued truth models. This combines the explicit representation of borderline cases with both semantic and stochastic uncertainty in order to define measures of subjective belief in vague propositions. Within this framework we investigate bridges between probability theory and fuzziness in a propositional logic setting. In particular, when the underlying truth model is taken from Kleene's three-valued logic, we provide a complete characterisation of compositional min–max fuzzy truth degrees. For classical and supervaluationist truth models we find partial bridges, with min and max combination rules recoverable only on a fragment of the language. Across all of these types of truth valuation, min–max operators emerge in those cases in which there is uncertainty about the relative sharpness or vagueness of the interpretation of the language.
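For readers unfamiliar with the min–max connectives mentioned above, here is a small sketch of Kleene's strong three-valued logic; encoding the truth values false, borderline, and true as 0.0, 0.5, and 1.0 is a common convention assumed here, not part of the paper itself.

```python
# Kleene's strong three-valued logic: F = false, U = borderline, T = true.
F, U, T = 0.0, 0.5, 1.0

def k_and(a, b):
    """Conjunction is min: true only when both are true, false when either is false."""
    return min(a, b)

def k_or(a, b):
    """Disjunction is max: true when either is true, false when both are false."""
    return max(a, b)

def k_not(a):
    """Negation swaps T and F and leaves the borderline value U fixed."""
    return 1.0 - a
```

One consequence worth noticing: for a borderline proposition `p`, the classical contradiction `p and not p` evaluates to the borderline value rather than false, which is exactly how truth-gaps show up in this setting.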
The quality of the new data used in the sequential learning phase of the online sequential extreme learning machine algorithm (OS-ELM) has a significant impact on the performance of OS-ELM. This paper proposes a novel data-filtering mechanism for OS-ELM from the perspective of fuzziness, yielding a fuzziness-based online sequential extreme learning machine algorithm (FOS-ELM). In FOS-ELM, when new data arrive, a fuzzy classifier first picks out the meaningful data according to the fuzziness of each sample; specifically, the new samples with high output fuzziness are selected and then used in sequential learning. Experimental results on eight binary classification problems and three multiclass classification problems show that FOS-ELM, updated with the new high-output-fuzziness samples, has better generalization performance than OS-ELM. Since unimportant data are discarded before sequential learning, FOS-ELM also saves memory and achieves higher computational efficiency. In addition, FOS-ELM can handle data one-by-one or chunk-by-chunk with fixed or varying chunk sizes. The relationship between the fuzziness of new samples and model performance is also studied, and is expected to provide useful guidelines for improving the generalization ability of online sequential learning algorithms.
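The chunk-filtering step described above can be sketched as follows; this is a hedged illustration in which the fuzziness measure, the keep-ratio selection rule, and all names are assumptions, not the exact FOS-ELM specification.

```python
import math

def output_fuzziness(mu, eps=1e-12):
    """Entropy-like fuzziness of a membership vector output by the fuzzy classifier."""
    total = 0.0
    for m in mu:
        m = min(max(m, eps), 1 - eps)  # clip to avoid log(0)
        total -= m * math.log(m) + (1 - m) * math.log(1 - m)
    return total / len(mu)

def filter_chunk(memberships, keep_ratio=0.5):
    """Keep the fraction of arriving samples with the highest output fuzziness;
    the rest are discarded before the sequential update, saving memory.
    Returns the indices of the retained samples in their original order."""
    ranked = sorted(range(len(memberships)),
                    key=lambda i: output_fuzziness(memberships[i]),
                    reverse=True)
    k = max(1, int(keep_ratio * len(memberships)))
    return sorted(ranked[:k])
```

The retained indices would then be fed to the usual OS-ELM recursive least-squares update, so the filter composes with one-by-one or chunk-by-chunk arrival without changing the update itself.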
The existing methods of determining an α-cut of a fuzzy set to construct its underlying shadowed set do not fully comply with the concept of shadowed sets, namely, a retention of the total amount of fuzziness and its localized redistribution throughout a universe of discourse. Moreover, no closed formula to calculate the corresponding α-cut is available. This paper proposes analytical formulas to calculate threshold values required in the construction of shadowed sets. We introduce a new algorithm to design a shadowed set from a given fuzzy set. The proposed algorithm, which adheres to the main premise of shadowed sets of capturing the essence of fuzzy sets, helps localize fuzziness present in a given fuzzy set. We represent the fuzziness of a fuzzy set as a gradual number. Through defuzzification of the gradual number of fuzziness, we determine the required threshold (i.e., some α-cut) used in the formation of the shadowed set. We show that the shadowed set obtained in this way comes with a measure of fuzziness that is equal to the one characterizing the original fuzzy set.
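The paper's closed-form thresholds are not reproduced here; as an illustration of what a shadowed-set threshold does, the sketch below implements the classical optimization-based criterion (balancing reduced and elevated fuzziness against the shadow region) that such analytical formulas improve upon. The grid-search step and function names are assumptions for the example.

```python
def shadowed_set_threshold(mu):
    """Grid search for the alpha in (0, 0.5) that best balances fuzziness
    relocation: grades <= alpha are reduced to 0, grades >= 1 - alpha are
    elevated to 1, and the in-between region becomes the shadow."""
    best_alpha, best_v = None, float('inf')
    for i in range(1, 50):          # candidate alphas 0.01 .. 0.49
        alpha = i * 0.01
        reduced = sum(m for m in mu if m <= alpha)           # fuzziness removed low
        elevated = sum(1 - m for m in mu if m >= 1 - alpha)  # fuzziness removed high
        shadow = sum(1 for m in mu if alpha < m < 1 - alpha) # shadow cardinality
        v = abs(reduced + elevated - shadow)
        if v < best_v:
            best_alpha, best_v = alpha, v
    return best_alpha

def to_shadowed(mu, alpha):
    """Map each membership grade to 0 (excluded), 1 (core), or 'S' (shadow)."""
    return [0 if m <= alpha else 1 if m >= 1 - alpha else 'S' for m in mu]
```

The paper's contribution, by contrast, derives the threshold analytically by defuzzifying a gradual number representing the fuzziness, so that the resulting shadowed set provably carries the same amount of fuzziness as the original fuzzy set.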
This contribution deals with developments in the history of philosophy, logic, and mathematics during the time before and up to the beginning of fuzzy logic. Even though the term "fuzzy" was introduced by Lotfi A. Zadeh in 1964/1965, it should be noted that older concepts of "vagueness" and "haziness" had previously been discussed in philosophy, logic, mathematics, applied sciences, and medicine. This paper delineates some specific paths through the history of the use of these "loose concepts". Vagueness was avidly discussed in the fields of logic and philosophy during the first decades of the 20th century, particularly in Vienna, Cambridge, Warsaw, and Lvov. An interesting sequel to these developments can be seen in the work of the Polish physician and medical philosopher Ludwik Fleck. Haziness and fuzziness were concepts of interest in mathematics and engineering during the second half of the 20th century. The logico-philosophical history presented here covers the work of Bertrand Russell, Max Black, and others. The mathematical-technical history deals with the theories founded by Karl Menger and Lotfi Zadeh. Menger's concepts of probabilistic metrics, hazy sets (ensembles flous) and micro-geometry, as well as Zadeh's theory of fuzzy sets, paved the way for the establishment of soft computing methods using vague concepts that connote the nonexistence of sharp boundaries.
The railway freight transportation planning problem under a mixed uncertain environment of fuzziness and randomness is investigated in this paper, in which the optimal paths, the amount of commodities passing through each path, and the frequency of services need to be determined. Based on the chance measure and critical values of the random fuzzy variable, three chance-constrained programming models are constructed for the problem with respect to different criteria. Equivalents of the objectives and constraints are also discussed in order to investigate the mathematical properties of the models. To solve the models, a potential-path searching algorithm, simulation algorithms, and a genetic algorithm are integrated into a hybrid algorithm that searches for an optimal solution. Finally, numerical examples are presented to illustrate the application of the models and the algorithm.
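To make the chance-constraint idea concrete, here is a minimal Monte-Carlo sketch of checking one such constraint from simulated cost realizations; the full random fuzzy machinery in the paper is richer, and the function name, interface, and sample data are assumptions for illustration only.

```python
def chance_constraint_holds(cost_samples, budget, confidence):
    """Empirical chance constraint Pr{cost <= budget} >= confidence,
    estimated from a list of simulated cost realizations (e.g. produced
    by the simulation algorithms inside the hybrid solver)."""
    hits = sum(1 for c in cost_samples if c <= budget)
    return hits / len(cost_samples) >= confidence
```

In a hybrid scheme like the one described above, a genetic algorithm would propose candidate plans and a simulation step like this one would accept or penalize each candidate according to whether its chance constraints hold.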