Abstract: The explosive growth of data volume and the ever-increasing demands of data value extraction have driven us into the era of big data. The "5V" (Variety, Velocity, Volume, Value, and Veracity) characteristics of big data pose great challenges to traditional computing paradigms and motivate the emergence of new solutions. Cloud computing is one of the representative technologies that can perform massive-scale and complex data computing by taking advantage of virtualized resources, parallel processing and data service integration with scalable data storage. However, as we are also experiencing the revolution of the Internet-of-things (IoT), the limitations of cloud computing in supporting lightweight end devices significantly impede its flourishing in the era where big data meets IoT, and raise the urgency of proposing new computing paradigms. We provide an overview of the topic of big data, and a comprehensive survey of how cloud computing and its related technologies can address the challenges raised by big data. Then, we analyze the disadvantages of cloud computing when big data encounters IoT, and introduce two promising computing paradigms, fog computing and transparent computing, to support the big data services of IoT. Finally, some open challenges and future directions are summarized to foster continued research efforts into this evolving field of study.
Abstract: A cascaded co-evolutionary model for Attribute reduction and classification based on Coordinating architecture with bidirectional elitist optimization (ARC-CABEO) is proposed for more practical applications. The regrouping and merging coordinating strategy of the ordinary-elitist-role-based population is introduced to represent a more holistic cooperative co-evolutionary framework of different populations for attribute reduction. The master-slave-elitist-based subpopulations are constructed to coordinate the behaviors of different elitists, and meanwhile the elitist optimization vector with the strongest balance between exploration and exploitation is selected to expedite the bidirectional attribute co-evolutionary reduction process. In addition, two coupled coordinating architectures and the elitist optimization vector are tightly cascaded to perform the co-evolutionary classification of reduction subsets, so that the preferred classification optimization goal can be better achieved. Experimental results verify that the proposed ARC-CABEO model has better feasibility and superior classification accuracy on different UCI datasets, compared with representative algorithms.
Abstract: Similar time series searching plays an important role in applications such as time series classification and outlier detection. We observe that different segments of a time series may have different significance, and thus propose to assign a different weight to each segment and extract the segments with the highest weights for distance computation. Since these segments are more representative, we can achieve high accuracy of similarity search with much lower computation overhead. Experiments on both real-world and synthetic data sets demonstrate that, if we use only those important segments rather than the whole time series when performing similarity search, we can achieve comparable or even higher accuracy while largely reducing the computation overhead.
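As a minimal sketch of the idea (not the authors' exact weighting scheme — segment variance is used here as a hypothetical significance measure), similarity can be computed over only the top-weight segments of the query:

```python
def _variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def top_segments(series, seg_len, k):
    # non-overlapping segment start indices, ranked by a significance weight
    starts = range(0, len(series) - seg_len + 1, seg_len)
    ranked = sorted(starts, key=lambda s: _variance(series[s:s + seg_len]),
                    reverse=True)
    return ranked[:k]

def weighted_distance(query, candidate, seg_len, k):
    # squared Euclidean distance restricted to the k most significant query segments
    d = 0.0
    for s in top_segments(query, seg_len, k):
        d += sum((q - c) ** 2
                 for q, c in zip(query[s:s + seg_len], candidate[s:s + seg_len]))
    return d
```

With k small, most of the series is skipped during distance computation, which is where the claimed overhead reduction comes from.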
Abstract: We propose an efficient scheme for implementing large-scale one-way Quantum computing (QC) with a novel hybrid solid-state quantum system. This system consists of N Nitrogen-vacancy (N-V) centers coupled to N separate Transmission line resonators (TLRs), which are interconnected by a Current-biased Josephson junction (CBJJ) superconducting phase qubit. We show how to prepare an N-qubit linear cluster state with the N N-V centers, then demonstrate how to extend the cluster state by connecting two linear cluster states into a two-dimensional cluster state, and finally, with our newly designed structures, demonstrate the basic QC operations. Our scheme may thus open up promising possibilities for implementing practical and scalable one-way quantum computers with the hybrid solid-state quantum system. We also discuss the experimental feasibility of our system.
Abstract: A related-key impossible differential attack on 24-round LBlock is constructed by using new 16-round related-key impossible differentials and adding 4 rounds at the top and 4 rounds at the bottom of these 16-round related-key impossible differential paths. The data and time complexities are about 2^63 chosen plaintexts and 2^75.42 24-round encryptions, respectively.
Abstract: Energy saving is extraordinarily important for real-time systems. Dynamic voltage scaling (DVS) is an important technique to reduce the energy consumption of processors that support voltage scaling, and it has been exploited extensively in task scheduling. However, many approaches use simplistic treatments, and some of them even neglect the large voltage transition overheads. Even among strategies that consider the penalty, frequency switching still produces large extra time overhead, and deadline misses occur frequently. In this paper, we propose an energy-efficient soft real-time dynamic programming scheme that uses quantitative switching overhead and communication penalty for multitask scheduling with uncertain execution time. The experiments show that our approaches significantly outperform existing solutions on both single-core and multi-core systems in terms of energy saving.
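The core trade-off — slower frequencies save energy but switching between frequencies costs both time and energy — can be sketched in miniature. The paper uses dynamic programming under uncertain execution times; this toy version simply enumerates frequency assignments for a small task set, with made-up frequency/power numbers, and charges a quantitative overhead for every frequency switch:

```python
import itertools

def best_assignment(task_cycles, freqs, switch_time, switch_energy, deadline):
    """Exhaustively pick a per-task frequency minimizing energy under a deadline.

    task_cycles: list of cycle counts per task
    freqs:       list of (frequency, power) operating points
    A time/energy penalty is added whenever consecutive tasks change frequency.
    """
    best = None
    for assign in itertools.product(range(len(freqs)), repeat=len(task_cycles)):
        t = e = 0.0
        last = None
        for cycles, fi in zip(task_cycles, assign):
            f, power = freqs[fi]
            if last is not None and fi != last:
                t += switch_time          # quantitative switching overhead
                e += switch_energy
            t += cycles / f
            e += power * cycles / f
            last = fi
        if t <= deadline and (best is None or e < best[1]):
            best = (assign, e)
    return best
```

Even in this tiny model, ignoring the switch overhead can make an infeasible (deadline-missing) schedule look attractive, which is the failure mode the abstract criticizes.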
Abstract: First-principles calculation is an effective method for material property calculation. An inversion design idea for dielectric materials based on first principles is put forward. In this design approach, the specifically demanded dielectric property of the material is the starting point of the whole design course, and the atomic structure characteristics of the material are solved using an inversion model based on first principles. The key issues requiring research are also discussed: the quantized structure of the electromagnetic and optical fields, the quantized polarization theory of dielectric materials, the uniform calculation method of dielectric properties, and the dielectric material inversion model.
Abstract: The Sweeney, Robertson and Tocher (SRT) algorithm is a common and efficient way to perform division and square root (div/sqrt). We propose overlapping two iterations into one cycle by predicting the remainder and quotient. To reduce latency, redundant representation is used, together with a minimum redundancy factor. Division and square root can be integrated into one unit, which reduces hardware cost. With a 40nm technology library, the area of our architecture after layout design is 37795μm², the power is 81.19mW, and the delay is only 656ps. The cycle counts for double-precision division and square root are 17 and 16, respectively. Experiments show that our architecture achieves small latency and high frequency, together with modest area and power.
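The iteration being overlapped can be illustrated with a plain radix-2 SRT division step using the redundant quotient-digit set {-1, 0, 1} — a software sketch of the classical recurrence, not the proposed overlapped hardware (operand ranges and selection constants follow the standard normalized-fraction formulation):

```python
def srt_divide(dividend, divisor, n_bits=30):
    """Radix-2 SRT division for 0.5 <= divisor < 1, 0 <= dividend < divisor.

    Each iteration shifts the partial remainder and selects a quotient digit
    from {-1, 0, 1} by inspecting only a rough estimate of the remainder --
    the redundancy that makes per-iteration logic (and overlap) cheap."""
    assert 0.5 <= divisor < 1 and 0 <= dividend < divisor
    r, q, weight = dividend, 0.0, 0.5
    for _ in range(n_bits):
        r *= 2                      # shift partial remainder
        if r >= 0.5:
            d = 1
        elif r < -0.5:
            d = -1
        else:
            d = 0                   # redundant digit: no subtraction needed
        r -= d * divisor
        q += d * weight
        weight /= 2
    return q
```

Because the digit selection tolerates an imprecise remainder, two such steps can be chained in one cycle by predicting the next remainder range — the overlap exploited in the abstract.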
Abstract: The translation model, which contains translation rules with probabilities, plays a crucial role in statistical machine translation. The conventional method estimates translation probabilities considering only the co-occurrence frequencies of bilingual translation units, while ignoring document-level context information. In this paper, we extend the conventional translation model to a topic-triggered one. Specifically, we estimate topic-specific translation probabilities of translation rules by leveraging topical context information, and score selected translation rules online according to the topic posterior distributions of translated sentences. Compared with the conventional model, our model allows for more fine-grained distinction among different translations. Experimental results on a large data set demonstrate the effectiveness of our model.
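The topic-triggered scoring can be sketched as a mixture over topics: a rule's probability given a document is its topic-specific probability weighted by the topic posterior of the translated sentence (the notation and inputs here are illustrative, not the paper's exact estimation procedure):

```python
def topic_translation_prob(rule_topic_probs, topic_posterior):
    # p(rule | doc) = sum over topics z of p(rule | z) * p(z | doc)
    return sum(p_rz * p_z for p_rz, p_z in zip(rule_topic_probs, topic_posterior))

def best_rule(candidates, topic_posterior):
    # online scoring: pick the candidate rule with the highest
    # topic-triggered probability for the current document context
    return max(candidates,
               key=lambda item: topic_translation_prob(item[1], topic_posterior))
```

Two rules with identical co-occurrence counts can thus receive different scores in different documents — the fine-grained distinction the abstract refers to.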
Abstract: Inspired by the behavior of cockroaches in nature, this paper presents a new optimization algorithm called Cockroach colony optimization (CCO). In the CCO algorithm, nests of cockroaches are placed at the "corners" of the search space. The current best solution to the optimization problem, called the food, can be split into several search targets by applying the logistic multi-peak map and the margin control strategies. By using a particular search scheme, the individual cockroaches can accomplish a highly efficient global and local search in each crawling process from a nest to a search target. The paper provides a formal convergence proof for the CCO algorithm. Experimental results show that the CCO algorithm can be applied to solve global numerical optimization problems with quick convergence and high precision.
Abstract: Business processes that facilitate organization cooperation and resource sharing play an important role in Community cloud (Comc). The nature of the applications and the decentralized infrastructure in Comc make it a big challenge to ensure the execution efficiency of business processes, especially when business requests are highly concurrent. We present a two-stage service replica strategy that improves the execution efficiency of business processes by shortening the response time of requests to a single service and reducing the interaction time among distributed services. We first utilize the social network properties of services to pre-allocate the replicas of key services. After that, a queueing model is adopted to determine the required additional replicas according to the quantity and frequency of user requests, and service replicas and requests are then scheduled dynamically. Experiments demonstrate the effectiveness of the proposed strategy.
Abstract: Cloud storage auditing is considered a significant service to verify the integrity of the data stored in the cloud. Liu et al. proposed a proof-of-storage protocol with public auditing from lattice assumptions, which can resist quantum computer attacks. They claim that the protocol enjoys desirable security properties, such as unforgeability and privacy preserving. We demonstrate that any malicious cloud service provider can cheat the third party auditor and the users by generating a valid response proof that passes the verification even if some data blocks are lost by accident. Moreover, the primitive data blocks may be recovered by any curious third party auditor by solving some linear equations. Our work can help cryptographers and engineers design and implement more secure and efficient lattice-based public auditing schemes for cloud storage data.
Abstract: The threshold implementation method of the Substitution box (S-box) was proposed by Nikova et al. for resisting first-order Differential power attacks with glitches. To lower the time complexity of a threshold implementation of a specific non-linear function, one needs to decompose the function first and then search for possible sharing methods for it. However, the time complexity of this search process is still non-trivial. In this paper, an effective method for searching threshold implementations of 4-bit S-boxes is proposed. It mainly consists of two stages. For the decomposing stage, an efficient way of decomposing an S-box is introduced. For the sharing stage, the search complexity is lowered by the technique of time-memory trade-off. As a result, threshold implementations of various lightweight block ciphers' S-boxes are given. Moreover, our method is applied to each 4-bit involutive S-box and some candidate threshold implementations are presented.
Abstract: A voltage reference with low Temperature coefficient (TC) and three outputs, which combines high Power supply rejection ratio (PSRR) with low power consumption, is presented in this paper. The proposed reference circuit, operating with all transistors biased in the subthreshold region, provides three reference voltages of 340mV, 680mV, and 1020mV. The subthreshold MOSFET design allows the circuit to work with a minimum current consumption of 7.4nA at a supply voltage of 1.2V. The mean line sensitivity is 1.7%/V for supply voltages ranging from 1.2V to 3V. The PSRR of the 340mV output voltage simulated at 100Hz and 10MHz is over 51.9dB and 120.4dB, respectively. Monte Carlo simulation shows a mean TC of 3.9ppm/℃ with a standard deviation of 1ppm/℃ over a set of 500 samples, in a temperature range from -30℃ to 100℃. The active area of the presented voltage reference is 0.003mm².
Abstract: With the development of social networks, many recommendation systems recommend items to a group of users, which is known as group recommendation. However, a recommendation will not be appropriate without user topical influence analysis. We propose a new group recommendation method based on user topical influence analysis. We first construct several topical sub-groups according to topics. Then we analyze user topical influence in each sub-group, including user influence on a specific topic and on the topical sub-group. Besides, four user factors are introduced to calculate the user social influence on topical sub-groups more accurately. Based on user topical influence analysis, we present our topical group recommendation algorithm, which calculates the predicted rating value for a sub-group by aggregating the weighted ratings of all users in the sub-group. The experimental results show convincingly that our proposed method can improve the group recommendation quality.
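The final aggregation step can be sketched as a simple influence-weighted average of member ratings — the influence weights themselves would come from the topical influence analysis, and the numbers below are placeholders:

```python
def subgroup_rating(member_ratings, influence_weights):
    # predicted sub-group rating = sum(w_u * r_u) / sum(w_u),
    # where w_u is user u's topical influence in the sub-group
    total = sum(influence_weights)
    return sum(r * w for r, w in zip(member_ratings, influence_weights)) / total
```

A highly influential member thus pulls the sub-group's predicted rating toward their own, which plain unweighted averaging cannot express.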
Abstract: This paper proposes a dynamic fuzzy partition method of the attribute domain suitable for the scheduling problem of Semiconductor wafer fabrication (SWF). Based on this partition method, it then gives new Fuzzy association classification rules (FACRs) for scheduling SWF, and presents a corresponding simple mining method, based on the Apriori algorithm, for obtaining effective FACRs. Furthermore, a Harmony search (HS) algorithm is designed to determine the rule parameters, including the minimum fuzzy support and the total number of linguistic values of each condition attribute for the simple fuzzy partition. Finally, computational simulations and comparisons based on practical data are provided, showing that the proposed FACRs can generate better results for almost all problem instances.
Abstract: The objective of traditional feature studies in Spoken language recognition (SLR) is to extract the linguistic discrimination among languages. However, applications in the security area are usually interested in one particular language, which requires that the features best reflect the differences between the target language and the other languages. To address this problem, the frame-level Phone log-posteriors feature (PLF), which has recently been introduced as a novel and effective feature in SLR, is optimized to achieve better performance on the Target language detection (TLD) task. The F-Ratio analysis method is used to analyze the contribution of each dimension of the feature vector to TLD. In this work, frame-level phone posterior probabilities are estimated by a phone recognizer and processed by taking the logarithm. The feature is then optimized by weighting each dimension according to the F-Ratio values. Finally, Principal component analysis (PCA) is used to decorrelate the feature and reduce the vector size. Experiments carried out on the NIST LRE 2007 dataset show the effectiveness of the optimized feature, which yields significant relative improvements in terms of Equal error rate (EER) with regard to the Gaussian mixture models-Support vector machines (GMM-SVM) system based on the original feature.
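The F-Ratio weighting step can be sketched as follows: each feature dimension is scored by the ratio of between-class variance (target language vs. the others) to average within-class variance, and the feature vector is then scaled dimension-wise by these scores. This is a generic F-Ratio sketch, not the paper's exact formulation:

```python
def f_ratios(classes):
    # classes: list of classes, each a list of equal-length feature vectors
    dim = len(classes[0][0])
    ratios = []
    for d in range(dim):
        vals = [[vec[d] for vec in c] for c in classes]
        means = [sum(vs) / len(vs) for vs in vals]
        grand = sum(means) / len(means)
        between = sum((m - grand) ** 2 for m in means) / len(means)
        within = sum(sum((x - m) ** 2 for x in vs) / len(vs)
                     for vs, m in zip(vals, means)) / len(vals)
        ratios.append(between / (within + 1e-12))  # epsilon guards division by 0
    return ratios

def weight_features(vec, ratios):
    # emphasize dimensions that separate the target language from the rest
    return [x * r for x, r in zip(vec, ratios)]
```

Dimensions that vary a lot across languages but little within a language get large weights; uninformative dimensions are suppressed before PCA.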
Abstract: The frequency of the input signal is specially selected to ensure that it just exceeds the cut-off frequency of the analog filters. Because the input signal is partly restrained by the attenuation of the analog filters, the output signal is sensitive to the variation of fault components. The peak voltage at key points is measured to construct fault samples of the analog filters. The relationship between fault components and fault samples is analyzed to verify that the fault samples carry strong fault information about the analog filters. The experimental results show that high-quality fault samples can be acquired efficiently, and the fault features of the samples can be verified objectively by the presented method.
Abstract: In most embedded-microprocessor-based Systems on chip (SoCs), the cache has become a major source of power consumption due to its increasing size and high access rate. Power optimization of the cache based on Compare-based adaptive clock gating (CACG) is proposed to reduce the power wasted while the cache is idle. By detecting the cache's working state, the CACG can automatically turn off its clock when it is in the idle state, saving a large percentage of dynamic power. Measurements of a real SoC chip fabricated in a TSMC 65nm CMOS process show that an average of 30.3% power reduction is gained in the Dhrystone benchmark at the cost of negligible area overhead and virtually no performance loss.
Abstract: A fast and efficient hardware implementation for computing the Singular value decomposition (SVD) and Eigenvalue decomposition (EVD) is presented. Considering that the SVD and EVD are complex and expensive operations, to achieve high performance with low computing complexity our approach takes full advantage of the combination of parallel and sequential computation, which can efficiently increase hardware utilization. Besides, regarding the EVD, we propose a hardware solution based on a simplified Coordinate rotation digital computer (CORDIC)-like algorithm which can obtain higher speed. The performance analysis and comparison results show that the proposed methods can be realized on Field-programmable gate arrays (FPGAs) with less computation time by using a systolic array. The proposed implementation could thus be an efficient alternative for real-time applications.
Abstract: Compressed sensing (CS) has recently been applied widely in Wireless sensor networks (WSNs). An optimal Compressed data gathering (CDG) framework for energy-efficient WSNs is proposed here. A novel Measurement matrix optimization algorithm (MMOA) is proposed for compressed data measurement in WSNs, and the Diffusion wavelet transform matrix (DWTM) is chosen for sparse representation of the compressed data. An Optimal data aggregation tree (ODAT) algorithm is presented based on CS and routing technology. The MMOA reduces the number of data transmissions under the same data reconstruction ratio, while the DWTM makes the original data sparser and increases the compressed data reconstruction ratio. The main purpose of the ODAT is to minimize the energy consumption of the whole WSN through the CDG technology and the optimal route. We validate the efficiency of the proposed CDG framework based on MMOA, DWTM and ODAT through extensive experiments.
Abstract: This paper presents an online unsupervised learning classification of pedestrians and vehicles for video surveillance. Different from traditional methods that depend on offline training, our method adopts an online labeling strategy based on temporal and morphological features, which saves time and labor to a large extent. It extracts the moving objects with their features from the original video. An online filtering procedure is adopted to label the moving objects according to certain thresholds on the speed and area features. The labeled objects are fed into an SVM classifier to generate the pedestrian and vehicle classifier. Experimental results illustrate that our unsupervised learning algorithm adapts to the polymorphism of pedestrians and the diversity of vehicles with high classification accuracy.
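The online label-filtering step can be sketched as a pair of conservative threshold tests. The thresholds below are hypothetical placeholders; only confidently labeled objects would be passed on to train the SVM:

```python
def label_moving_object(speed, area,
                        speed_hi=8.0, area_hi=500.0,    # hypothetical vehicle thresholds
                        speed_lo=3.0, area_lo=200.0):   # hypothetical pedestrian thresholds
    """Label a tracked object from its speed (pixels/frame) and area (pixels^2)."""
    if speed >= speed_hi and area >= area_hi:
        return "vehicle"
    if speed <= speed_lo and area <= area_lo:
        return "pedestrian"
    return None  # ambiguous: discarded rather than risk a wrong training label
```

Discarding the ambiguous middle band is what lets the automatically generated labels be clean enough to train a classifier without manual annotation.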
Abstract: Carcinoembryonic antigen (CEA) is one of the most widely used tumor markers worldwide. A portable system was designed for rapid field-specific testing of CEA based on immunomagnetic separation and chemiluminescence immunoassay, which consisted of a chemiluminescent immunosensor and an optical test instrument. The chemiluminescent immunosensor utilized an immuno-sandwich assay scheme with Horseradish peroxidase (HRP)-labeled anti-CEA antibody and Immunomagnetic beads (IMBs). The test instrument consisted of an optical detection module and a central processing and control system, and its optical noise was below 40 photons/s. The system could complete a sample test in 30 minutes, and the detection results showed a good linear relationship between the intensity of luminescence and the concentration of CEA in the range from 1 to 1000ng/mL, with a correlation coefficient of 0.9858. Real human samples were also tested with this system in comparison with clinical results, and a correlation coefficient of 0.9934 was obtained. The instrument is portable and easy to operate, showing potential for application in clinical cancer testing.
Abstract: To reduce computational complexity, decrease hardware resource consumption, and make it practicable, we propose a novel coordinate-transformation-based low-complexity Least squares (LS) fitting algorithm. The positive integer variable is set as a reference coordinate, so the coefficients become constants. The fitting polynomial in the reference coordinate system can then be realized easily with lower complexity, and a coordinate transformation maps the fitting curve back to the original coordinate system. Compared with conventional LS fitting, our implementation results show that the proposed LS fitting algorithm can greatly reduce hardware resources on the premise of meeting the precision requirements.
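A sketch of the reference-coordinate idea, assuming a quadratic fit: when the abscissa is the fixed integer index k = 1..n, the normal-equation matrix depends only on n and can be treated as a set of constants, and mapping back to the original coordinate x = x0 + (k-1)Δx is a mere substitution. This is an illustrative reconstruction, not the paper's hardware design:

```python
def fit_quadratic_indexed(y):
    """Fit y[k] ~ a0 + a1*k + a2*k^2 over the integer index k = 1..n.

    Since k is a fixed positive-integer reference coordinate, the
    normal-equation matrix S depends only on n and could be precomputed
    as constants in hardware; only the right-hand side depends on y."""
    n = len(y)
    ks = list(range(1, n + 1))
    S = [[float(sum(k ** (i + j) for k in ks)) for j in range(3)] for i in range(3)]
    b = [float(sum(yv * k ** i for k, yv in zip(ks, y))) for i in range(3)]
    # solve the 3x3 system S a = b by Gaussian elimination with partial pivoting
    A = [row[:] + [bv] for row, bv in zip(S, b)]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 4):
                A[r][c] -= f * A[i][c]
    a = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        a[i] = (A[i][3] - sum(A[i][c] * a[c] for c in range(i + 1, 3))) / A[i][i]
    return a
```

To express the curve in the original coordinate, substitute k = (x - x0)/Δx + 1 into the fitted polynomial; the transformation costs a few multiply-adds rather than a fresh matrix solve.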
Abstract: In the three-dimensional microscopic biological image restoration processing of digital confocal microscopy, the selection of the three-Dimensional Point spread function (3D-PSF) space size determines the restoration effect and restoration time. Based on the double funnel-shaped structure of the 3D-PSF and an analysis of the relationship of the 3D-PSF space size to the image restoration effect and restoration time, this paper proposes a comprehensive image restoration evaluation criterion based on restoration efficiency, as well as a 3D-PSF selection method based on the inflection point of the restoration efficiency curve. Three groups of 3D-PSFs with different layer distances were used in image restoration experiments; the relationship between the 3D-PSF space sizes of different diameters and the image restoration effect and restoration time was built, and the restoration efficiency curve was obtained. Based on the curve inflection point, the optimal minimum-space 3D-PSF was selected. Experimental results showed that the method has good feasibility.
Abstract: An increasing number of heterogeneous networks are connected with each other in the cloud, and both cloud controller platforms and cloud service providers have rapidly developed networking support. In most cloud networking services, users have to configure a variety of network-layer devices such as switches and subnets in heterogeneous networks. We propose a service-level network model that provides higher-level connectivity and policy abstractions, based on Software defined networking (SDN) technology, to closely integrate applications in the cloud with the heterogeneous networks through programmable interfaces and automatic operations. We describe the architecture of Hetersdn, an SDN controller platform that supports a service-level model for application networking in heterogeneous networks in clouds.
Abstract: The detection problem for Multiple-access Spatial modulation (M-SM) is investigated in this paper, where multiple transmitters adopting spatial modulation communicate with the receiver at the same time. The optimal Maximum-likelihood (ML) detection suffers from high computational complexity, while Sphere decoding (SD) cannot reduce the complexity effectively because of the multiple active antennas in M-SM. To avoid the high complexity, a Space-alternating generalized expectation-maximization (SAGE) algorithm aided List-projection (S-LP) detector is proposed and applied to M-SM systems. The received signal vector is first projected onto the subspaces spanned by the columns of the channel matrices corresponding to the possible active antennas. Then the combinations of antenna indices with the largest projections are selected as candidate index sets, based on which a modified SAGE algorithm is applied to update the candidate symbols. Both analysis and simulation results show that the proposed S-LP detector achieves near-optimum performance with significantly reduced complexity compared with ML and SD detection.
Abstract: The Non-line-of-sight (NLOS) error is a key issue for mobile user location in cellular wireless communication systems. In this paper, we propose a robust and accurate approach using a biased Kalman filter based on the third-order cumulant to mitigate the NLOS error. The simulation results indicate that, with less prior information about the communication environment, the proposed approach achieves better location performance even in severe NLOS situations.
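A minimal one-dimensional sketch of the biased-Kalman idea: the third-order cumulant of recent innovations detects the positive skew that NLOS bias typically introduces, and the measurement-noise variance is then inflated so biased measurements are down-weighted. The parameters and the exact biasing rule here are illustrative, not the paper's:

```python
def third_order_cumulant(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 3 for x in xs) / len(xs)

def biased_kalman_1d(meas, q=0.01, r=1.0, window=10, inflate=50.0, thresh=0.5):
    # scalar random-walk Kalman filter on range measurements; positive skew
    # in recent innovations (third-order cumulant) signals NLOS bias
    x, p = meas[0], 1.0
    innovations, out = [], []
    for z in meas:
        p += q                        # predict (random-walk state model)
        v = z - x                     # innovation
        innovations = (innovations + [v])[-window:]
        r_eff = r * inflate if third_order_cumulant(innovations) > thresh else r
        k = p / (p + r_eff)           # gain with (possibly inflated) noise
        x += k * v
        p *= 1 - k
        out.append(x)
    return out
```

Since NLOS propagation only lengthens a range measurement, its error distribution is one-sided; the third-order cumulant exploits exactly that asymmetry, which a variance test would miss.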
Abstract: Channel reciprocity is a vital component for key generation from multipath channels. In this paper, we introduce the effect of the delay between measurements on channel reciprocity, build an Autoregression (AR) model to approximate time-varying channel measurements, and apply a linear prediction method to reduce the measurement errors and enhance the dependency between the measurements of the two communication parties. For the subsequent bit conversion, we propose a novel quantization scheme named the Probability distance method (PDM) and compare its performance with existing methods theoretically. The simulation results show that the optimal solution to the linear prediction problem can effectively increase the channel reciprocity in the assumed channel models, and the proposed PDM can achieve a lower disagreement rate with acceptable cost.
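The AR-based linear prediction step can be sketched with a first-order model: the AR coefficient is fitted by least squares on past channel samples and used to predict the channel at the counterpart's (delayed) measurement time. This is an AR(1) toy version; the paper's model order and estimator may differ:

```python
def ar1_coeff(h):
    # least-squares AR(1) coefficient: minimizes sum_t (h[t] - a * h[t-1])^2
    num = sum(h[t] * h[t - 1] for t in range(1, len(h)))
    den = sum(x * x for x in h[:-1])
    return num / den

def predict_next(h):
    # one-step linear prediction, used to compensate the delay between
    # the two parties' channel measurements
    return ar1_coeff(h) * h[-1]
```

Replacing a stale measurement with its one-step prediction shrinks the mismatch caused by the measurement delay, which is what raises the reciprocity between the two parties' key material.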
Abstract: The problem of consistency maintenance in replication is a fundamental issue in cloud storage. Existing solutions either fail to achieve good efficiency or suffer from low reliability. We propose a novel consistency maintenance strategy based on a diamond topology, which organizes all the nodes in the cloud storage system into a highly symmetrical, reliable structure. The experimental results show that our diamond topology reduces the network overhead by at least 55.9% compared to the state-of-the-art random topology, and achieves up to 49.1% enhancement in reliability over the tree topology.
Abstract: This paper deals with an efficient method for channel estimation in Orthogonal frequency-division multiplexing (OFDM) systems. The channels are assumed to be Time-varying (TV) and approximated by a Second-order polynomial (SOP) model. Increasing the polynomial order reduces the model error but requires more polynomial estimation time. To circumvent this problem, we construct two SOP models by using a repeated-pattern preamble and pilots. As the channel information can be obtained from the second SOP model, whose polynomial estimation period is shorter than that of the general SOP model, the proposed scheme is superior to the existing methods. Simulation results are presented to illustrate the superiority of the proposed approach.
Abstract: Previous four-component decomposition methods have more unknowns than equations to solve, so they have to determine each scattering power under certain assumptions and avoid negative powers in the decomposed results with physical power constraints. This paper presents a multi-component decomposition for multi-look Polarimetric SAR (PolSAR) data that combines the Generalized similarity parameter (GSP) and the eigenvalue decomposition. It extends the existing four-component decomposition by adding diffuse scattering as a fifth scattering component, accounting for additional cross-polarized power that could represent terrain effects and rough surface scattering. Unlike the previous methods, the new method determines the volume scattering contribution by a modified nonnegative eigenvalue decomposition method and utilizes the GSP to determine the powers of the three remaining scattering contributions (i.e., odd-bounce, double-bounce, and diffuse scattering) directly, without extra assumptions and constraints. Experiments show that the new method is more straightforward and reasonable.
Abstract: In the case of bright targets over a dark background, the cross-correlation noise of Multiple input and multiple output (MIMO) Synthetic aperture radar (SAR) orthogonal waveforms in the same frequency coverage will increase, which can degrade the image. An Inter-pulse Costas hopping and intra-pulse Slope coded linearly frequency modulated (IPC-SCLFM) waveform is designed and analyzed to decrease the autocorrelation and cross-correlation noise. Besides, a novel acquisition mode called "MIMO Coprime SAR" (MIMO-Cop SAR) is proposed, with Coprime arrays (Co-arrays) on receive across the SAR line of flight based on digital beamforming. An improved processing scheme that multiplies the values at each pixel of the two co-array images is proposed to suppress the cross-correlation noise. MIMO-Cop SAR can effectively suppress the cross-correlation noise, reduce the amount of data to be stored, and achieve a high signal-to-noise ratio in a simple way. The effectiveness of MIMO-Cop SAR for maritime surveillance is demonstrated by simulation experiments.
Abstract: Wideband traveling-wave PIN diode switches and attenuators are proposed. The switches and attenuators consist of three shunt PIN diodes, six radial stubs, a bias circuit and a DC return circuit. Wide bandwidth is achieved with six different radial stubs as the reflective load of the diodes; the stubs act as short circuits to the diodes at different frequency points for broad bandwidth. For the Single pole single throw (SPST) switch, the lowest measured Insertion loss (IL) is 0.75dB and remains lower than 2.0dB over 85-105GHz, and its isolation is better than 21.3dB. As a Voltage-controlled variable attenuator (VCA), the typical attenuation range is 23dB and the attenuation fluctuation is lower than 5dB over 85-105GHz.