With the purpose of identifying cyber threats and possible incidents, intrusion detection systems (IDSs) are widely deployed in various computer networks. To enhance the detection capability of a single IDS, collaborative intrusion detection networks (or collaborative IDSs) have been developed, which allow IDS nodes to exchange data with each other. However, data management and trust management remain two key challenges for current detection architectures, which may degrade the effectiveness of such detection systems. In recent years, blockchain technology has shown its adaptability in many fields, such as supply chain management, international payment, and interbank transactions. As blockchain can protect the integrity of data storage and ensure process transparency, it has the potential to be applied to the intrusion detection domain. Motivated by this, this paper provides a review of the intersection of IDSs and blockchains. In particular, we introduce the background of intrusion detection and blockchain, discuss the applicability of blockchain to intrusion detection, and identify open challenges in this direction.
To meet the fast-growing energy demand and, at the same time, tackle environmental concerns arising from conventional energy sources, renewable energy sources are being integrated into power networks to ensure reliable and affordable energy for the public and industrial sectors. However, the integration of renewable energy into ageing electrical grids can create new risks and challenges, such as security of supply, base-load energy capacity, and seasonal effects. Recent research and development in microgrids has shown that microgrids fueled by renewable energy sources and managed by smart grids (using smart sensors and smart energy management systems) can offer higher reliability and more efficient energy systems in a cost-effective manner. Further improvement in the reliability and efficiency of electrical grids can be achieved by utilizing dc distribution in microgrid systems. The dc microgrid is an attractive technology in the modern electrical grid system because of its natural interface with renewable energy sources, electric loads, and energy storage systems. In the recent past, increased research activity has been observed in the area of dc microgrids, bringing this technology closer to practical implementation. This paper presents the state of the art in dc microgrid technology, covering ac interfaces, architectures, possible grounding schemes, power quality issues, and communication systems. The advantages of dc grids can be harvested in many applications to improve their reliability and efficiency, and this paper also discusses the benefits and challenges of using dc grid systems in several such applications. Finally, this paper highlights the urgent need for standardization of dc microgrid technology and presents recent updates in this area.
The purpose of this paper is to survey and assess the state-of-the-art in automatic target recognition for synthetic aperture radar imagery (SAR-ATR). The aim is not to develop an exhaustive survey of the voluminous literature, but rather to capture in one place the various approaches for implementing the SAR-ATR system. This paper is meant to be as self-contained as possible, and it approaches the SAR-ATR problem from a holistic end-to-end perspective. A brief overview of the breadth of the SAR-ATR challenges is provided. This is couched in terms of a single-channel SAR, and it is extendable to multi-channel SAR systems. Stages pertinent to the basic SAR-ATR system structure are defined, and the motivations of the requirements and constraints on the system constituents are addressed. For each stage in the SAR-ATR processing chain, a taxonomization methodology for surveying the numerous methods published in the open literature is proposed. Carefully selected works from the literature are presented under the taxa proposed. Novel comparisons, discussions, and comments are pinpointed throughout this paper. A two-fold benchmarking scheme for evaluating existing SAR-ATR systems and motivating new system designs is proposed. The scheme is applied to the works surveyed in this paper. Finally, a discussion is presented in which various interrelated issues, such as standard operating conditions, extended operating conditions, and target-model design, are addressed. This paper is a contribution toward fulfilling an objective of end-to-end SAR-ATR system design.
The tremendous success of machine learning algorithms at image recognition tasks in recent years intersects with a time of dramatically increased use of electronic medical records and diagnostic imaging. This review introduces machine learning algorithms as applied to medical image analysis, focusing on convolutional neural networks and emphasizing clinical aspects of the field. The advantage of machine learning in an era of medical big data is that significant hierarchical relationships within the data can be discovered algorithmically without laborious hand-crafting of features. We cover key research areas and applications of medical image classification, localization, detection, segmentation, and registration. We conclude by discussing research obstacles, emerging trends, and possible future directions.
This paper investigates the coexistence between two key enabling technologies for fifth generation (5G) mobile networks: non-orthogonal multiple access (NOMA) and millimeter-wave (mmWave) communications. Particularly, the application of random beamforming to mmWave-NOMA systems is considered in order to avoid the requirement that the base station know all the users' channel state information. Stochastic geometry is used to characterize the performance of the proposed mmWave-NOMA transmission scheme by using key features of mmWave systems, i.e., that mmWave transmission is highly directional and potential blockages will thin the user distribution. Two random beamforming approaches that can further reduce the system overhead are also proposed, and their performance is studied analytically in terms of sum rates and outage probabilities. Simulation results are also provided to demonstrate the performance of the proposed schemes and verify the accuracy of the developed analytical results.
The cyber-physical system (CPS) is a new trend in Internet-of-Things-related research, in which physical systems act as sensors to collect real-world information and communicate it to the computation modules (i.e., the cyber layer), which in turn analyze the data and notify the findings to the corresponding physical systems through a feedback loop. Contemporary researchers recommend integrating cloud technologies into the CPS cyber layer to ensure scalability of storage, computation, and cross-domain communication capabilities. Though a few descriptive models of the cloud-based CPS architecture exist, it is important to analytically describe the key CPS properties: computation, control, and communication. In this paper, we present a digital twin architecture reference model for the cloud-based CPS, C2PS, in which we analytically describe these key properties. The model helps in identifying various degrees of basic and hybrid computation-interaction modes in this paradigm. We have designed a C2PS smart interaction controller using a Bayesian belief network, so that the system dynamically considers current contexts. The composition of a fuzzy rule base with the Bayesian network further endows the system with reconfiguration capability. We also describe analytically how C2PS subsystem communications can generate even more complex systems-of-systems. Finally, we present a telematics-based prototype driving assistance application for the vehicular domain of C2PS, VCPS, to demonstrate the efficacy of the architecture reference model.
Fog computing-enhanced Internet of Things (IoT) has recently received considerable attention, as fog devices deployed at the network edge can not only provide low latency and location awareness but also improve real-time responsiveness and quality of service in IoT application scenarios. Privacy-preserving data aggregation is one of the typical fog computing applications in IoT, and many privacy-preserving data aggregation schemes have been proposed in past years. However, most of them only support data aggregation for homogeneous IoT devices and cannot aggregate data from hybrid IoT devices into one in some real IoT applications. To address this challenge, in this paper, we present a lightweight privacy-preserving data aggregation scheme, called LPDA, for fog computing-enhanced IoT. The proposed LPDA is characterized by employing homomorphic Paillier encryption, the Chinese Remainder Theorem, and one-way hash chain techniques to not only aggregate hybrid IoT devices' data into one but also filter injected false data early at the network edge. Detailed security analysis shows that LPDA is secure and privacy-enhanced with differential privacy techniques. In addition, extensive performance evaluations are conducted, and the results indicate that LPDA is indeed lightweight in fog computing-enhanced IoT.
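The additive homomorphism of Paillier encryption that such aggregation schemes build on can be illustrated with a toy sketch. The primes and readings below are deliberately tiny and insecure, and the full scheme's other ingredients (CRT packing of hybrid device data, hash-chain filtering, differential privacy) are not shown:

```python
# Toy Paillier sketch: the product of ciphertexts decrypts to the sum of
# plaintexts, so a fog node can aggregate without seeing individual readings.
# Tiny primes for illustration only -- real deployments use >= 2048-bit moduli.
import math
import random

p, q = 293, 433                  # toy primes (insecure)
n = p * q
n2 = n * n
g = n + 1                        # standard choice g = n + 1
lam = (p - 1) * (q - 1)          # phi(n) works as lambda for Paillier
mu = pow(lam, -1, n)             # modular inverse (Python 3.8+)

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return (((u - 1) // n) * mu) % n

readings = [12, 7, 30]           # hypothetical IoT device readings
agg = 1
for m in readings:
    agg = (agg * encrypt(m)) % n2    # homomorphic addition
print(decrypt(agg))              # recovers sum(readings)
```

Because aggregation happens on ciphertexts, the fog node never learns any single device's reading, only the sum.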
Fog computing, an extension of cloud computing services to the edge of the network to decrease latency and network congestion, is a relatively recent research trend. Although both cloud and fog offer similar resources and services, the latter is characterized by low latency, with more widely spread and geographically distributed nodes to support mobility and real-time interaction. In this paper, we describe the fog computing architecture and review its different services and applications. We then discuss security and privacy issues in fog computing, focusing on service and resource availability. Virtualization is a vital technology in both fog and cloud computing that enables virtual machines (VMs) to coexist on a physical server (host) and share resources. These VMs could be subject to malicious attacks, or the physical server hosting them could experience system failure, both of which result in unavailability of services and resources. Therefore, a conceptual smart pre-copy live migration approach is presented for VM migration. Using this approach, we can estimate the downtime after each iteration to determine whether to proceed to the stop-and-copy stage during a system failure or an attack on a fog computing node. This minimizes both the downtime and the migration time to guarantee resource and service availability to the end users of fog computing. Finally, future research directions are outlined.
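The per-iteration downtime estimate at the heart of a pre-copy approach can be sketched as follows; the page counts, dirty rate, and bandwidth are illustrative assumptions, not values from the paper:

```python
# Sketch of a pre-copy live-migration loop: after each copy round, estimate
# the downtime a stop-and-copy would incur now (remaining dirty pages divided
# by bandwidth) and halt the pre-copy phase once it meets the target.
def precopy_rounds(total_pages, dirty_rate, bandwidth, target_downtime,
                   max_iters=30):
    to_send = total_pages
    for it in range(1, max_iters + 1):
        transfer_time = to_send / bandwidth          # seconds for this round
        dirtied = dirty_rate * transfer_time         # pages rewritten meanwhile
        est_downtime = dirtied / bandwidth           # stop-and-copy cost now
        if est_downtime <= target_downtime:
            return it, est_downtime                  # proceed to stop-and-copy
        to_send = dirtied                            # resend only dirty pages
    return max_iters, to_send / bandwidth            # forced stop-and-copy

rounds, downtime = precopy_rounds(10_000, 1_000, 5_000, 0.05)
print(rounds, downtime)
```

As long as the dirty rate is below the available bandwidth, the dirty set shrinks geometrically and the loop converges to an acceptably short stop-and-copy phase.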
The combination of tomographic imaging and deep learning, or machine learning in general, promises to empower not only image analysis but also image reconstruction. The latter aspect is considered in this perspective article with an emphasis on medical imaging to develop a new generation of image reconstruction theories and techniques. This direction might lead to intelligent utilization of domain knowledge from big data, innovative approaches for image reconstruction, and superior performance in clinical and preclinical applications. To realize the full impact of machine learning for tomographic imaging, major theoretical, technical and translational efforts are immediately needed.
The achievable performance of subcarrier-index modulation (SIM) is analyzed in terms of its minimum Euclidean distance, constrained and unconstrained average mutual information, as well as its peak-to-average power ratio (PAPR). Our performance investigations identify the beneficial operating region of the SIM scheme over its conventional orthogonal frequency-division multiplexing (OFDM) counterpart, hence providing general design guidelines for the SIM parameters. More specifically, an SIM scheme is shown to be beneficial for the scenario of a relatively low transmission rate below 2 b/s/Hz. In addition, we demonstrate that the PAPR of the SIM scheme is comparable with that of its OFDM counterpart under the idealized simplifying assumption of having Gaussian input symbols.
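The PAPR comparison can be illustrated with a small sketch. The pure-Python IDFT, the BPSK symbol choices, and the 8-subcarrier size are illustrative assumptions rather than the paper's exact setup:

```python
# Compute the PAPR of time-domain symbols obtained from an inverse DFT.
# An OFDM-like symbol activates all subcarriers; a SIM-like symbol keeps
# only a subset active (the index pattern itself carries information in SIM).
import cmath
import math

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / N)
                for k in range(N)) / N
            for t in range(N)]

def papr_db(x):
    powers = [abs(v) ** 2 for v in x]
    return 10 * math.log10(max(powers) / (sum(powers) / len(powers)))

ofdm_sym = idft([1, -1, 1, 1, -1, 1, -1, -1])   # all 8 subcarriers, BPSK
sim_sym = idft([1, 0, -1, 0, 1, 0, -1, 0])      # 4 of 8 subcarriers active
print(papr_db(ofdm_sym), papr_db(sim_sym))
```

The same `papr_db` measurement applies to either scheme, which is what makes the head-to-head comparison in the paper meaningful.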
In this paper, to combine the Bonferroni mean (BM) operator with two-tuple linguistic Pythagorean fuzzy numbers (2TLPFNs), we extend the weighted BM (WBM) operator, the generalized WBM (GWBM) operator, and the dual GWBM operator to 2TLPFNs, proposing the two-tuple linguistic Pythagorean fuzzy WBM (2TLPFWBM) operator, the 2TLPFWGBM operator, the generalized 2TLPFWBM (G2TLPFWBM) operator, the generalized 2TLPFWGBM operator, the dual G2TLPFWBM operator, and the dual G2TLPFWGBM operator. Then, some multiple attribute decision making (MADM) procedures are developed based on these operators. Finally, an illustrative example of the safety assessment of a construction project is given.
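For readers unfamiliar with the BM operator that all of these extensions build on, a minimal sketch of the classical Bonferroni mean on crisp numbers follows; the 2TLPFN versions additionally require the two-tuple linguistic Pythagorean fuzzy operational laws, which are not shown:

```python
# Classical Bonferroni mean BM^{p,q}: captures interrelationships between
# pairs of arguments by averaging the products a_i^p * a_j^q over all i != j.
def bonferroni_mean(values, p=1, q=1):
    n = len(values)
    s = sum(values[i] ** p * values[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1 / (p + q))

print(bonferroni_mean([0.6, 0.7, 0.9]))   # aggregate three attribute scores
```

Unlike a plain weighted average, the pairwise products let the aggregate reflect interactions between attributes, which is why BM-style operators are attractive for MADM.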
Fully convolutional neural networks (FCNs) have been shown to achieve state-of-the-art performance on the task of classifying time series sequences. We propose augmenting fully convolutional networks with long short-term memory recurrent neural network (LSTM RNN) sub-modules for time series classification. Our proposed models significantly enhance the performance of fully convolutional networks with a nominal increase in model size and require minimal preprocessing of the data set. The proposed long short-term memory fully convolutional network (LSTM-FCN) achieves state-of-the-art performance compared with other approaches. We also explore the use of an attention mechanism to improve time series classification with the attention long short-term memory fully convolutional network (ALSTM-FCN). The attention mechanism allows one to visualize the decision process of the LSTM cell. Furthermore, we propose refinement as a method to enhance the performance of trained models. An overall analysis of the performance of our models is provided and compared with other techniques.
Since the early 1990s, a large number of chaos-based communication systems have been proposed that exploit the properties of chaotic waveforms. The motivation lies in the significant advantages provided by this class of non-linear signals. To this end, many communication schemes and applications have been specially designed for chaos-based communication systems, with energy, data rate, and synchronization awareness considered in most designs. Recently, however, the major focus has been on non-coherent chaos-based systems, to benefit from the advantages of chaotic signals and non-coherent detection and to avoid the use of chaotic synchronization, which suffers from weak performance in the presence of additive noise. This paper presents a comprehensive survey of wireless radio-frequency chaos-based communication systems. First, it outlines the challenges of chaos implementations and synchronization methods, followed by a comprehensive literature review and analysis of chaos-based coherent techniques and their applications. In the second part of the survey, we offer a taxonomy of the current literature, focusing on non-coherent detection methods. For each modulation class, this paper categorizes different transmission techniques, elaborating on their modulation, receiver type, data rate, complexity, energy efficiency, multiple access scheme, and performance. In addition, this survey reports an analysis of the tradeoffs between different chaos-based communication systems. Finally, several concluding remarks are discussed.
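A toy sketch of generating a chaotic waveform with the logistic map, one of the simplest sources of the non-linear signals these systems exploit (the particular map, its parameter, and its use here are illustrative assumptions; practical chaos-based systems use a variety of maps and analog circuits):

```python
# Logistic map x_{k+1} = r * x_k * (1 - x_k): for r near 4 the orbit is
# chaotic, and nearby initial conditions diverge -- the sensitivity that
# chaos-based modulation schemes exploit for spreading.
def logistic_sequence(x0, r=3.99, n=20):
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        seq.append(x)
    return seq

a = logistic_sequence(0.3)
b = logistic_sequence(0.3000001)   # tiny perturbation of the seed
print(a[-1], b[-1])                # trajectories typically diverge
```

This sensitivity to initial conditions is precisely what makes coherent chaotic synchronization at the receiver difficult under additive noise, motivating the non-coherent designs surveyed above.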
Presently, educational institutions compile and store huge volumes of data, such as student enrolment and attendance records, as well as examination results. Mining such data yields valuable information that serves its users well. Rapid growth in educational data points to the fact that distilling massive amounts of data requires a more sophisticated set of algorithms. This issue led to the emergence of the field of educational data mining (EDM). Traditional data mining algorithms cannot be directly applied to educational problems, as these problems may have specific objectives and functions. This implies that a preprocessing algorithm has to be applied first, and only then can specific data mining methods be applied to the problems. One such preprocessing step in EDM is clustering. Many studies on EDM have focused on the application of various data mining algorithms to educational attributes. Therefore, this paper provides a systematic literature review, spanning more than three decades (1983-2016), of clustering algorithms and their applicability and usability in the context of EDM. Future insights are outlined based on the literature reviewed, and avenues for further research are identified.
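As a concrete instance of the clustering step, a minimal 1-D k-means sketch is shown below; the data, feature encoding, and cluster count are illustrative assumptions (a real EDM pipeline would first encode records such as grades or attendance as numeric feature vectors):

```python
# Minimal k-means on 1-D data: alternately assign points to the nearest
# center and recompute each center as the mean of its assigned points.
def kmeans_1d(points, centers, iters=10):
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)),
                          key=lambda c: abs(p - centers[c]))
            groups[nearest].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# e.g. exam scores split into a low-performing and a high-performing cluster
print(kmeans_1d([35, 40, 42, 78, 81, 85], [0, 100]))
```

The resulting clusters can then feed a downstream mining method, e.g. profiling at-risk students, which is the preprocessing role the survey attributes to clustering.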
This paper presents a complete approach to the successful utilization of a high-performance extreme learning machine (ELM) toolbox for Big Data. It summarizes recent advances in algorithmic performance, gives a fresh view of the ELM solution in relation to traditional linear algebraic approaches, and leverages the latest software and hardware performance achievements. The results are applicable to a wide range of machine learning problems and thus provide a solid ground for tackling numerous Big Data challenges. The included toolbox is aimed at making the full potential of ELMs available to the widest range of users.
In recent decades, we have witnessed the evolution of information technologies from the development of VLSI technologies, to communication and networking infrastructure, to the standardization of multimedia compression and coding schemes, to effective multimedia content search and retrieval. As a result, multimedia devices and digital content have become ubiquitous. This path of technological evolution has naturally led to a critical issue that must be addressed next, namely, to ensure that content, devices, and intellectual property are being used by authorized users for legitimate purposes, and to be able to forensically prove, with high confidence, when they are not. When security is compromised, intellectual rights are violated, or authenticity is forged, forensic methodologies and tools are employed to reconstruct what has happened to digital content in order to answer who has done what, when, where, and how. The goal of this paper is to provide an overview of what has been done over the last decade in the new and emerging field of information forensics regarding theories, methodologies, state-of-the-art techniques, and major applications, and to provide an outlook on the future.
For task-scheduling problems in cloud computing, a multi-objective optimization method is proposed here. First, considering the diversity of resources and tasks in cloud computing, we propose a resource cost model that defines the demand of tasks on resources in more detail. This model reflects the relationship between the user's resource costs and the budget costs. A multi-objective optimization scheduling method is then proposed based on this resource cost model. This method treats the makespan and the user's budget costs as constraints of the optimization problem, achieving multi-objective optimization of both performance and cost. An improved ant colony algorithm is proposed to solve this problem. Two constraint functions were used to evaluate and provide feedback regarding the performance and budget cost. These two constraint functions allow the algorithm to adjust the quality of the solution in a timely manner based on feedback, in order to reach the optimal solution. Simulation experiments were designed to evaluate this method's performance using four metrics: 1) makespan; 2) cost; 3) deadline violation rate; and 4) resource utilization. Experimental results show that, on these four metrics, the proposed multi-objective optimization method outperforms other similar methods, improving by 56.6% in the best-case scenario.
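The role of the two constraint functions can be sketched as a simple schedule evaluation; the task lengths, VM speeds, and prices below are toy values, and the pheromone-update machinery of the ant colony algorithm is not shown:

```python
# Evaluate a candidate schedule (task -> VM assignment): compute makespan
# and total cost, then check them against the deadline and budget constraints
# whose feedback steers the ant colony search.
def evaluate(assignment, task_len, vm_speed, vm_price, deadline, budget):
    finish = [0.0] * len(vm_speed)
    cost = 0.0
    for task, vm in enumerate(assignment):
        t = task_len[task] / vm_speed[vm]   # execution time on this VM
        finish[vm] += t
        cost += t * vm_price[vm]
    makespan = max(finish)
    return makespan, cost, makespan <= deadline, cost <= budget

# toy instance: 3 tasks, 2 VMs
result = evaluate([0, 1, 1], [4, 6, 2], [1, 2], [1, 3], deadline=5, budget=20)
print(result)
```

A candidate that violates either constraint would be penalized in the pheromone update, pushing subsequent ants toward schedules that satisfy both the deadline and the budget.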
In this paper, we investigate dual hesitant bipolar fuzzy multiple attribute decision making problems in which there exists a prioritization relationship over the attributes. Motivated by the idea of Hamacher operations and prioritized aggregation operators, we develop some Hamacher prioritized aggregation operators for aggregating dual hesitant bipolar fuzzy information: the dual hesitant bipolar fuzzy Hamacher prioritized average operator, the dual hesitant bipolar fuzzy Hamacher prioritized geometric operator, the dual hesitant bipolar fuzzy Hamacher prioritized weighted average operator, and the dual hesitant bipolar fuzzy Hamacher prioritized weighted geometric operator. We then utilize these operators to develop approaches to solving dual hesitant bipolar fuzzy multiple attribute decision making problems. Finally, a real-world example is analyzed to illustrate the relevance and effectiveness of the proposed methodology.
With the development and application of new information technologies, such as cloud computing, the Internet of Things, big data, and artificial intelligence, a smart manufacturing era is coming. At the same time, various national manufacturing development strategies have been put forward, such as Industry 4.0, the Industrial Internet, manufacturing based on cyber-physical systems, and Made in China 2025. However, one of the specific challenges in achieving smart manufacturing under these strategies is how to converge the physical and virtual worlds of manufacturing, so as to realize a series of smart operations in the manufacturing process, including smart interconnection, smart interaction, smart control and management, etc. In this context, as a basic unit of manufacturing, the shop-floor is required to achieve interaction and convergence between physical and virtual spaces, which is not only an imperative demand of smart manufacturing but also an evolving trend of the shop-floor itself. Accordingly, a novel concept of the digital twin shop-floor (DTS), based on the digital twin, is explored, and its four key components are discussed: the physical shop-floor, the virtual shop-floor, the shop-floor service system, and the shop-floor digital twin data. Moreover, the operation mechanisms and implementation methods for DTS are studied, and key technologies as well as challenges ahead are investigated.
A 3-D Shenzhen City Web platform based on a Web virtual reality geographical information system (GIS) is presented. A 3-D global browser is employed to load multiple types of required data from the city, such as 3-D building model data, residents' information, and real-time and historical traffic data. Using these data, 3-D analysis and visualization of the city's information are conducted on the platform. The amount of information that can be visualized with this platform is very large, and the GIS-based navigational scheme provides great flexibility in accessing the different available data sources. All the presented functions of the platform are derived from customers' practical demands. The system design takes into account existing geographic human-computer interaction research results.