Cloud computing is now a well-consolidated paradigm for on-demand service provisioning on a pay-as-you-go model. Elasticity, one of the key properties of this computing model, is the ability to add and remove resources “on the fly” to handle load variation. Although many works in the literature have surveyed cloud computing and its features, a detailed analysis of cloud elasticity is still lacking. To fill this gap, we propose a survey on cloud computing elasticity based on an adaptation of a classic systematic review. We address different aspects of elasticity, such as definitions, metrics and tools for measurement, elasticity evaluation, and existing solutions. Finally, we present some open issues and future directions. To the best of our knowledge, this is the first study of cloud computing elasticity using a systematic review approach.
Information-centric sensor networks (ICSNs) are a paradigm of wireless sensor networks that focus on delivering information from the network based on user requirements, rather than serving as a point-to-point data communication network. Introducing learning in such networks can help to dynamically identify good data delivery paths by correlating past actions and results, make intelligent adaptations to improve the network lifetime, and also improve the quality of information delivered by the network to the user. However, there are several factors and limitations that must be considered while choosing a learning strategy. In this paper, we identify some of these factors and explore various learning techniques that have been applied to sensor networks and other applications with similar requirements in the past. We provide our recommendation on the learning strategy based on how well it complements the needs of ICSNs, while keeping in mind the cost, computation, and operational overhead limitations.
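To make the kind of learning discussed above concrete, the sketch below shows a minimal tabular Q-learning agent that learns a low-cost next-hop policy on a toy topology. The graph, the reward shaping (a fixed bonus at the sink, link cost as penalty), and all parameter values are illustrative assumptions of ours, not a construction from any specific ICSN proposal.

```python
import random

def q_learn_path(links, source, sink, episodes=2000,
                 alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning of next hops: `links` maps node -> {neighbor: cost};
    reward is minus the link cost, plus a bonus of 10 on reaching the sink."""
    random.seed(seed)
    q = {n: {m: 0.0 for m in nbrs} for n, nbrs in links.items()}
    for _ in range(episodes):
        node = source
        for _ in range(20):                       # cap episode length
            nbrs = list(q[node])
            a = (random.choice(nbrs) if random.random() < eps
                 else max(nbrs, key=q[node].get))  # epsilon-greedy action
            r = -links[node][a] + (10.0 if a == sink else 0.0)
            nxt = 0.0 if a == sink else max(q[a].values())
            q[node][a] += alpha * (r + gamma * nxt - q[node][a])
            if a == sink:
                break
            node = a
    path, node = [source], source                 # greedy read-out of the policy
    while node != sink and len(path) < 10:
        node = max(q[node], key=q[node].get)
        path.append(node)
    return path
```

On a toy graph where the direct-looking route is expensive, the learned greedy path routes around it, which mirrors how correlating past actions and rewards can steer data delivery in an ICSN.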
The quality of experience (QoE) of 3D contents is usually considered to be the combination of the perceived visual quality, the perceived depth quality, and the visual fatigue and comfort. When either fatigue or discomfort is induced, studies tend to show that observers prefer to experience a 2D version of the contents. For this reason, providing a comfortable experience is a prerequisite for observers to actually consider the depth effect a visualization improvement. In this paper, we propose a comprehensive review of visual fatigue and discomfort induced by the visualization of 3D stereoscopic contents, in the light of the physiological and psychological processes enabling depth perception. First, we review the multitude of manifestations of visual fatigue and discomfort (near triad disorders, symptoms of discomfort), as well as means for their detection and evaluation. We then discuss how, in 3D displays, ocular and cognitive conflicts with real-world experience may cause fatigue and discomfort; these include the accommodation–vergence conflict, the inadequacy between presented stimuli and the observers' depth of focus, and the cognitive integration of conflicting depth cues. We also discuss some limits of stereopsis that constrain our ability to perceive depth, in particular the perception of planar and in-depth motion, the limited fusion range, and various stereopsis disorders. Finally, this paper discusses how the different aspects of fatigue and discomfort apply to 3D technologies and contents. We notably highlight the need to respect a comfort zone and to avoid camera and rendering artifacts. We also discuss the influence of visual attention, exposure duration, and training. Conclusions provide guidance for best practices and future research.
Proxy signature schemes allow proxy signers to sign messages on behalf of an original signer, a company, or an organization. Such schemes have been suggested for use in a number of applications, particularly in distributed computing, where delegation of rights is quite common. Many identity-based proxy signature schemes using bilinear pairings have been proposed, but the relative computation cost of a pairing is approximately twenty times higher than that of a scalar multiplication over an elliptic curve group. To reduce both the running time and the signature size, in this letter we propose an identity-based proxy signature scheme without bilinear pairings. With its greatly reduced running time, our scheme is more practical than previous related schemes for real applications.
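As a self-contained illustration of why dropping pairings helps, the toy below implements a classic Schnorr signature over a prime-order subgroup of Z_p*, whose cost is dominated by modular exponentiations (the finite-field analogue of elliptic-curve scalar multiplications) and which uses no pairing at all. This is not the scheme proposed in the letter, and the 64-bit parameters are far too small for real security; it is only a sketch of the pairing-free design pattern.

```python
import hashlib, secrets

def _is_prime(n):
    """Deterministic Miller-Rabin, valid for 64-bit integers with these bases."""
    if n < 2:
        return False
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % a == 0:
            return n == a
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# toy safe prime p = 2q + 1 (64-bit demo size; real deployments need >= 2048 bits)
q = (1 << 63) + 1
while not (_is_prime(q) and _is_prime(2 * q + 1)):
    q += 2
p = 2 * q + 1
g = 4  # a quadratic residue, hence a generator of the order-q subgroup

def keygen():
    x = secrets.randbelow(q - 1) + 1          # secret key
    return x, pow(g, x, p)                    # (sk, pk)

def _h(r, msg):
    return int.from_bytes(hashlib.sha256(str(r).encode() + msg).digest(), "big") % q

def sign(x, msg):
    k = secrets.randbelow(q - 1) + 1          # per-signature nonce
    r = pow(g, k, p)
    return r, (k + x * _h(r, msg)) % q        # signature (r, s)

def verify(y, msg, sig):
    # g^s == r * y^H(r, msg)  (mod p): only exponentiations, no pairing
    r, s = sig
    return pow(g, s, p) == (r * pow(y, _h(r, msg), p)) % p
```

Signing and verifying cost a handful of exponentiations each, which is the efficiency argument the letter makes against pairing-based constructions.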
With the increasing adoption of embedded smart devices and their involvement in different application fields, complexity may quickly grow, making vertical ad hoc solutions ineffective. Recently, the integration of the Internet of Things (IoT) and the Cloud has emerged as one of the most promising approaches to managing the proliferation of both data and devices. In this paper, following the idea of reusing as much tooling as possible, we propose to adopt a widely used and competitive Infrastructure-as-a-Service framework, OpenStack, for infrastructure management. We describe the approaches and architectures implemented so far, in preliminary form, for enabling Cloud-mediated interactions with droves of sensor- and actuator-hosting nodes, presenting Stack4Things, a framework for Sensing-and-Actuation-as-a-Service (SAaaS). In particular, starting from a detailed requirement analysis, we focus on the Stack4Things subsystems devoted to resource control and management, as well as those related to the management and collection of sensing data. Several use cases are presented, showing how our proposed framework can be viewed as a concrete step toward the complete fulfillment of the SAaaS vision.
Network traffic describes the characteristics and user behaviors of communication networks, and it is a crucial input to network management and traffic engineering. This paper proposes a new prediction algorithm for network traffic in large-scale communication networks. First, we use signal analysis theory to transform network traffic from the time domain to the time-frequency domain, where the traffic signal is decomposed into low-frequency and high-frequency components. Second, the gray model is used to model the low-frequency component, while the white Gaussian noise model describes the high-frequency component. This is reasonable because the low-frequency and high-frequency components represent, respectively, the trend and fluctuation properties of network traffic, which the gray model and the white Gaussian noise model capture well. Third, prediction models for the low-frequency and high-frequency components are built and combined into a hybrid prediction algorithm for network traffic in communication networks. Finally, traffic data from a real network is used to validate our approach. Simulation results indicate that our algorithm achieves much lower prediction error than previous methods.
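The gray model used for the trend component is typically the GM(1,1) model. A minimal sketch of fitting it and forecasting ahead is shown below; the accumulation, least-squares fit, and back-differencing steps are the standard GM(1,1) procedure, but the implementation details are our own illustration rather than the paper's code.

```python
import math

def gm11_forecast(x0, steps=1):
    """Fit a GM(1,1) gray model to a positive series x0 and forecast `steps` ahead."""
    n = len(x0)
    # 1-AGO: accumulated generating operation
    x1 = [sum(x0[:i + 1]) for i in range(n)]
    # background values: means of consecutive accumulated points
    z1 = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]
    # least-squares fit of a, b in x0[k] = -a*z1[k] + b (2x2 normal equations)
    szz = sum(z * z for z in z1)
    sz = sum(z1)
    szy = sum(z1[k - 1] * x0[k] for k in range(1, n))
    sy = sum(x0[1:])
    m = n - 1
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    # time-response function of the whitened equation, then back-differencing
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    return [x1_hat(k) - x1_hat(k - 1) for k in range(n, n + steps)]
```

On a series with a smooth exponential trend (e.g. 10% growth), the one-step forecast lands close to the continuation of the trend, which is exactly the role the low-frequency model plays in the hybrid algorithm.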
In this paper, the performance of dual-hop amplify-and-forward (AF) two-way relaying systems is considered, where the terminals and relay are subject to interference from a finite number of co-channel interferers. The system is evaluated in terms of outage probability and throughput in the delay-limited transmission mode. To keep the analysis mathematically tractable, new expressions of the outage probability are derived for the energy harvesting protocols governed by the time-switching and power-splitting coefficients, and the corresponding throughput is also calculated. Based on the analytical results, this paper investigates the impact of system parameters, such as the energy harvesting time/power fractions, the number of interferers, and the signal-to-noise ratio (SNR), on throughput performance. Monte Carlo simulation results are presented to verify the tightness of the analysis of the proposed energy harvesting two-way relaying system.
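A Monte Carlo evaluation of this kind can be sketched as follows. The snippet estimates outage probability and delay-limited throughput for a simplified time-switching energy-harvesting link over Rayleigh fading; the high-SNR end-to-end SNR approximation, the conversion-efficiency value, and the omission of co-channel interference are all simplifying assumptions of ours, not the paper's exact system model.

```python
import random

def ts_outage_throughput(alpha, psi_db=20.0, rate=1.0, snr_th_db=5.0,
                         trials=20000, seed=1):
    """Monte Carlo estimate of outage probability and delay-limited throughput
    for a simplified time-switching (TS) energy-harvesting relay link.
    Rayleigh fading on both hops; interference deliberately ignored."""
    random.seed(seed)
    psi = 10 ** (psi_db / 10)        # transmit SNR
    th = 10 ** (snr_th_db / 10)      # SNR outage threshold
    eta = 0.8                        # energy-conversion efficiency (assumed)
    outages = 0
    for _ in range(trials):
        h = random.expovariate(1.0)  # |h|^2, source -> relay (Rayleigh fading)
        g = random.expovariate(1.0)  # |g|^2, relay -> destination
        # high-SNR approximation of the end-to-end SNR under the TS protocol
        snr_e2e = (2 * eta * alpha / (1 - alpha)) * psi * h * g
        if snr_e2e < th:
            outages += 1
    p_out = outages / trials
    # delay-limited throughput: data flows in the (1 - alpha) fraction of time
    throughput = (1 - alpha) / 2 * rate * (1 - p_out)
    return p_out, throughput
```

Sweeping `alpha` with such a loop exposes the throughput trade-off the paper studies: a larger harvesting fraction lowers outage but shrinks the time left for data transmission.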
Public key encryption with keyword search is a useful primitive that provides searchable ciphertexts for some predefined keywords. It allows a user to send a trapdoor to a storage server, which enables the latter to locate all encrypted data containing the keyword(s) encoded in the trapdoor. To remove the requirement of a secure channel between the server and the receiver in identity-based encryption with keyword search, Wu et al. proposed a designated-server identity-based encryption scheme with keyword search. However, our cryptanalysis indicates that Wu et al.'s scheme fails to achieve ciphertext indistinguishability. To overcome this security weakness and offer a multiple-keyword search function, we put forward a designated-server identity-based encryption scheme with conjunctive keyword search. In the random oracle model, we formally prove that the proposed scheme satisfies ciphertext indistinguishability and trapdoor indistinguishability and resists off-line keyword-guessing attacks. Comparative analysis shows that it is efficient and practical.
Attribute-based encryption, especially ciphertext-policy attribute-based encryption, plays an important role in data sharing. In the process of data sharing, a secret key contains no information identifying its user, so a user may share the secret key with others for benefits without being discovered. In addition, the attribute authority can generate a secret key from any attribute set, so if a secret key is abused, it is difficult to judge whether the abused key comes from a user or from the attribute authority. Moreover, the access control structure usually leaks sensitive information in a distributed network, and the efficiency of attribute-based encryption is a bottleneck for its applications. Fortunately, blockchain technology can guarantee the integrity and non-repudiation of data. In view of these issues, an efficient and privacy-preserving traceable attribute-based encryption scheme is proposed. In the proposed scheme, blockchain technologies are used to guarantee both the integrity and the non-repudiation of data, and ciphertexts can be generated quickly using a pre-encryption technique. Moreover, attributes are hidden in anonymous access control structures using an attribute bloom filter. When a secret key is abused, the source of the abused key can be audited. Security and performance analysis shows that the proposed scheme is secure and efficient.
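The attribute bloom filter idea can be illustrated with a plain Bloom filter: attribute names appearing in the access structure are inserted into a bit array via multiple hashes, so a party can test whether an attribute might be relevant without the policy listing attributes in the clear. This is a simplified stand-in (the scheme's actual construction additionally encodes attribute positions in the access matrix), and the sizes and hash counts below are arbitrary.

```python
import hashlib

class AttributeBloomFilter:
    """Toy Bloom filter hiding which attribute names occur in an access
    structure. Membership tests may yield false positives, never false
    negatives; parameters here are illustrative only."""

    def __init__(self, size=256, hashes=4):
        self.size, self.hashes = size, hashes
        self.bits = [0] * size

    def _positions(self, attr):
        # derive `hashes` independent bit positions from SHA-256
        for i in range(self.hashes):
            d = hashlib.sha256(f"{i}:{attr}".encode()).digest()
            yield int.from_bytes(d[:4], "big") % self.size

    def add(self, attr):
        for pos in self._positions(attr):
            self.bits[pos] = 1

    def might_contain(self, attr):
        return all(self.bits[pos] for pos in self._positions(attr))
```

A decryptor holding the attribute "doctor" can check the filter and proceed, while the policy itself exposes only an anonymous bit array rather than the attribute list.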
Software-defined networking (SDN) is being widely adopted by enterprise networks, but providing security features in these next-generation networks remains a challenge. In this article, we present the main security threats in software-defined networking and propose AuthFlow, an authentication and access control mechanism based on host credentials. The main contributions of our proposal are threefold: (i) a host authentication mechanism just above the MAC layer in an OpenFlow network, which guarantees low overhead and ensures fine-grained access control; (ii) credential-based authentication that performs access control according to the privilege level of each host, by mapping the host credentials to the set of flows that belong to the host; (iii) a new framework for control applications, enabling software-defined network controllers to use the host identity as a new flow field for defining forwarding rules. A prototype of the proposed mechanism was implemented on top of the POX controller. The results show that AuthFlow denies access to hosts that lack valid credentials or whose authorization has been revoked. Finally, we show that our scheme allows each host different levels of access to network resources according to its credential.
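The credential-to-flow mapping in (ii) can be sketched as a simple policy lookup: each credential carries a privilege level, and the controller installs a flow only if that level permits it. The table contents, function name, and revocation handling below are hypothetical illustrations of the idea, not AuthFlow's actual API.

```python
# hypothetical privilege levels and the flows each level may install
ACL = {
    "admin": {"tcp:22", "tcp:80", "tcp:443"},
    "staff": {"tcp:80", "tcp:443"},
    "guest": {"tcp:80"},
}

def authorize_flow(credentials, host_mac, proto, dst_port, revoked=frozenset()):
    """Return True if the host's credential permits installing this flow.
    `credentials` maps a host MAC to its privilege level."""
    level = credentials.get(host_mac)
    if level is None or host_mac in revoked:
        return False  # no valid credential, or authorization was revoked
    return f"{proto}:{dst_port}" in ACL.get(level, set())
```

A controller hook would consult such a lookup before emitting a flow-mod, so a guest host gets web access only, while a host with a revoked credential is denied everything.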
The smart power grid is regarded as the next revolutionary innovation in electric power generation, transmission, and distribution technology. Smart grids are an example of a cyber-physical system (CPS) and an extremely critical infrastructure. They are expected to be more secure and must have the ability to self-heal and recover. Smart power grids are also a major target for different kinds of cyber attacks since, according to the model architecture of the smart power grid, they are now open networked systems. This paper presents a comprehensive survey on understanding the smart power grid, its important components, cyber security and other kinds of issues, existing methodologies and approaches for communication protocols, and the architecture of smart power grids. We conclude by discussing various research challenges that remain open in the literature, providing a better understanding of the problem, the current solution space, and future research directions for defending smart power grids against different cyber attacks.