Abstract: Monotone span programs (MSPs) and linear codes (LCs) are efficient tools for constructing linear secret sharing schemes (LSSSs) for a given access structure. Since the size of an MSP or the length of an LC corresponds to the communication complexity of an LSSS, a main motivation for studying MSPs and LCs is to establish lower bounds on their sizes or lengths. Efficiently constructing an MSP of smallest size, or an LC of shortest length, for a given access structure Γ is therefore one of the most important open problems. Our contributions are: we extend TANG et al.'s result, showing that, for any given access structure Γ, there exists an MSP or an LC realizing Γ if and only if a certain system of quadratic equations has a solution; we exploit the relationship between LCs and MSPs to obtain the greatest lower bound on the row size and the column size of MSPs realizing a given Γ, as well as an upper bound on the column size; and we give an algorithm that constructs an MSP of smallest size.
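The realization condition above can be illustrated with a toy check (not the paper's construction): an MSP is a matrix over a field whose rows are labeled by parties, and a set of parties is authorized exactly when the target vector lies in the span of the rows it owns. A minimal sketch over GF(2), with a hypothetical 2-out-of-2 example:

```python
# Illustrative sketch only: checks whether a labeled GF(2) matrix (an MSP)
# accepts a subset of parties, i.e. whether the target vector lies in the
# span of that subset's rows.

def row_reduce_gf2(rows):
    """Return the rank of a list of GF(2) row vectors."""
    rows = [list(r) for r in rows]
    rank, ncols = 0, len(rows[0]) if rows else 0
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [(a ^ b) for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def is_authorized(M, labels, subset, target):
    """subset is authorized iff adding the target vector does not raise
    the rank of the rows owned by subset."""
    own = [row for row, p in zip(M, labels) if p in subset]
    if not own:
        return False
    return row_reduce_gf2(own) == row_reduce_gf2(own + [target])

# Hypothetical 2-out-of-2 scheme: A holds s+r, B holds r; target is e1.
M = [[1, 1], [0, 1]]
labels = ["A", "B"]
e1 = [1, 0]
assert is_authorized(M, labels, {"A", "B"}, e1)   # both together recover
assert not is_authorized(M, labels, {"A"}, e1)    # A alone learns nothing
assert not is_authorized(M, labels, {"B"}, e1)    # B alone learns nothing
```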
Abstract: We propose a new efficient algorithm, Cuckoo search fault diagnosis (CSFD), for the system-level fault diagnosis problem. A KMP initialization algorithm is proposed based on the K-means partition algorithm; a fitness function is designed according to the equation constraints satisfied by the test model; and the binary mapping method is improved by optimizing the existing binary mapping algorithm. Experiments show that the KMP algorithm significantly reduces the gap between the initial solution and the actual solution, and that CSFD significantly improves efficiency and correctness compared with existing typical swarm intelligence diagnosis algorithms.
Abstract: A novel robust audio watermarking algorithm based on Empirical mode decomposition (EMD) is proposed. The intrinsic feature of the final residual decomposed from each audio frame is selected to carry the watermark, and the algorithm works by shifting each element of the final residual to make its sum greater than or less than 0. Experimental results show that the proposed algorithm does not change the properties of the final residual after embedding, and that the watermark is robust against various kinds of attacks. Compared with existing classic EMD-based algorithms, the proposed algorithm largely improves both the robustness and the imperceptibility of the watermarking.
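The embedding rule described above, shifting the final residual so that the sign of its sum encodes one bit, can be sketched as follows; the uniform-shift strategy and the `margin` parameter are illustrative assumptions, not the paper's exact scheme:

```python
def embed_bit(residual, bit, margin=1.0):
    """Shift every sample of the final residual by the same offset so that
    the residual's sum is > 0 for bit 1 and < 0 for bit 0. `margin` is a
    hypothetical parameter trading robustness against distortion."""
    n = len(residual)
    target = margin if bit else -margin
    offset = (target - sum(residual)) / n  # uniform shift keeps the shape
    return [x + offset for x in residual]

def extract_bit(residual):
    """Recover the bit from the sign of the residual's sum."""
    return 1 if sum(residual) > 0 else 0

r = [0.3, -0.7, 0.1, -0.2]          # toy "final residual" of one frame
assert extract_bit(embed_bit(r, 1)) == 1
assert extract_bit(embed_bit(r, 0)) == 0
```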
Abstract: We propose an approach called Bug report assignment with topic modeling and heterogeneous network analysis (BAHA) to automatically assign bug reports to developers. Existing studies adopt social network analysis to characterize the collaboration of developers, but the networks they use are all homogeneous. In real bug-resolution practice, different developers collaborate on different bug reports, which a homogeneous network cannot capture. We instead use a heterogeneous network that relates reporters, bug reports and developers to characterize developers' collaboration. Experiments on the Eclipse JDT project show that BAHA outperforms state-of-the-art methods for automatic bug report assignment.
Abstract: We present two fault injection attacks against the IC-Printing block cipher (PRINTcipher). The basic idea of our attacks is that, by using certain pairs of input difference and output difference, the attacker can determine the permutation control key. Recovering the permutation control key requires injecting at least 4 faults, and at least 15 faults are needed to reveal the whole key.
Abstract: In this work, a Storm-based query language system (SQLS) is proposed for real-time data stream analysis. The system is compatible with the Continuous query language (CQL) specification. It supports both continuous queries and one-time queries over streaming data, and meets the requirements of user experience (traditional SQL-style queries) and QoS (such as real-time response and throughput). To better meet the throughput requirement and enhance processing efficiency, a load shedding algorithm and cache optimization are employed in the implementation of the SQL-like operators. Finally, performance tests of the proposed SQLS were conducted on a stand-alone Storm platform and on Storm clusters. Experimental results show that our system can not only meet users' needs but also extend the functionality of real-time streaming query processing.
Abstract: This paper addresses the face recognition problem in a more challenging scenario where the training and test samples are both subject to visual variations of pose, expression and misalignment. We employ dense Scale-invariant feature transform (SIFT) feature matching as a generic transformation to roughly align the training samples, and then identify input facial images via an improved sparse representation model based on the aligned training samples. Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our method for face recognition compared with previous methods.
Abstract: In Quadrature amplitude modulated (QAM) transmission, a residual carrier degrades the quality of the received signal. We show that, when a residual carrier exists, the carrier tracking problem is equivalent to a time-varying Blind source separation (BSS) problem. This motivates us to propose a new receiver architecture, tolerant to residual carrier, based on a Recursive least squares (RLS) BSS algorithm. Simulations show the promising performance of the proposed algorithm in tracking the residual carrier and improving the quality of the received signal.
Abstract: This paper presents a general Bayesian model for speaker verification tasks. It is a generative probabilistic model, and thanks to its simple analytical properties, a computationally efficient expectation-maximization algorithm can be derived to obtain the model parameters. A closed-form solution, which allows the enrollment set to scale in size, is given in a fully Bayesian way for making speaker verification decisions. Factor analysis is employed to model the speaker-specific components, so the redundant information in the model can be dropped. Experimental results are evaluated by both equal error rate and minimum detection cost function. The proposed approach shows promising results on the National institute of standards and technology (NIST) Speaker recognition evaluation (SRE) 2010 extended and 2012 core tasks. Significant improvements are obtained over Gaussian probabilistic linear discriminant analysis, especially under phone-call conditions and mismatched train-test channel conditions. Contrastive experiments with other popular generative probabilistic models are also presented.
Abstract: The expanding demands of multimedia applications bring out more and more video standards aimed at improving coding and compression efficiency. As the most commonly used transform, the Discrete cosine transform (DCT) achieves excellent energy compaction and good compression efficiency. Hardware sharing is the most widely used strategy to reduce the cost of video codecs. Based on traditional matrix factorization, this paper makes three observations that direct the design of the proposed hardware sharing architecture. The proposed architecture can compute the 8×8 DCT of AVS, H.264, VC-1 and HEVC at low cost, and can decode Full-HD and WQXGA format video sequences in real time. The design has been synthesized in 0.13 μm technology. The synthesis results show that the proposed architecture achieves a 76.9% reduction in gate count, an 85.6% decrease in power consumption and a 35% improvement in operating speed compared with other existing designs.
Abstract: Built-in Electro-static discharge (ESD) protection circuits for Radio frequency identification (RFID) tag ICs are proposed. The ESD protection function is built into the rectifier and the amplitude limiter. The rectifier and limiter are connected directly to the RF interface, and some of their transistors can discharge large currents; through redesign and optimization, these transistors can be used to build the ESD protection circuits. The built-in ESD protection circuits improve the ESD protection level and reduce the layout area. The circuits have been fabricated in a 0.18 μm CMOS process. Test results show that the built-in ESD protection circuits work well under 4 kV ESD stress and save as much as 72% of the layout area compared with the foundry's standard ESD protection cells.
Abstract: The Fast Fourier transform (FFT) accelerator and the Coordinate rotation digital computer (CORDIC) algorithm play important roles in signal processing. We propose a configurable floating-point FFT accelerator based on CORDIC rotation, in which twiddle direction prediction is introduced to reduce hardware cost and twiddle angles are generated in real time to save memory. To perform CORDIC rotation efficiently, a novel approach is presented in which segmented-parallel iteration and compressed iteration based on carry-save adders (CSA) are employed, and redundant CORDIC is used to reduce the latency of each iteration. To demonstrate the efficiency of our FFT accelerator, four FFT accelerators are prototyped on an FPGA chip to perform a batch FFT. Experimental results show that our structure, which is composed of four butterfly units and supports FFT sizes ranging from 64 to 8192 points, occupies 33230 (3%) REGs and 143006 (30%) LUTs. The clock frequency can reach 122 MHz. The resources of the double-precision FFT are only about 2.5 times those of the single-precision one, while the theoretical factor is 4. Moreover, only 13331 cycles are required to perform an 8192-point double-precision FFT with the four butterfly units in parallel.
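The CORDIC rotation at the heart of such a butterfly unit can be sketched in software; this is the textbook rotation-mode iteration, not the paper's segmented-parallel hardware variant:

```python
import math

def cordic_rotate(x, y, angle, iters=32):
    """Rotate (x, y) by `angle` (radians, |angle| < pi/2) using only shifts
    and adds per iteration, the core operation of a CORDIC-based twiddle."""
    # Precompute arctan(2^-i) and the CORDIC gain K = prod cos(arctan(2^-i)).
    atans = [math.atan(2.0 ** -i) for i in range(iters)]
    K = 1.0
    for i in range(iters):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(iters):
        d = 1.0 if z >= 0 else -1.0   # rotation direction each iteration
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * atans[i]
    return x * K, y * K               # undo the accumulated gain

x, y = cordic_rotate(1.0, 0.0, math.pi / 4)
# both coordinates converge to cos(pi/4) = sin(pi/4) ≈ 0.7071
```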
Abstract: Reviews of electronic products are important resources for manufacturers to gather feedback from customers, and they provide an important reference for customers making purchase decisions. High-quality electronic product reviews are valuable, and review analysis is becoming an open issue. Based on binary feature functions, a maximum entropy classifier for electronic product reviews is proposed to identify the valuable reviews. Considering the simplicity of the themes in reviews, a text classifier integrating features of descriptive anaphora is provided to improve the model's fit. Furthermore, a feature library is constructed with the Term frequency-inverse document frequency (TF-IDF) algorithm to locate descriptions, and descriptive anaphora are identified by means of the minimum Inflectional phrase (IP) parsing tree. Experiments show that the proposed classifier has relatively high stability and precision, which makes the reviews much more usable for manufacturers making product strategy and for customers making purchase decisions.
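The TF-IDF weighting used to build the feature library can be sketched as follows; this is the minimal textbook formulation (tf = count / doc length, idf = log(N / df)), not the paper's exact feature pipeline:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights for a tokenized corpus: tf is the term's relative
    frequency in the document, idf penalizes terms found in many docs."""
    n = len(docs)
    df = Counter(term for doc in docs for term in set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return weights

# Toy review corpus (hypothetical tokens).
docs = [["screen", "bright", "screen"], ["battery", "weak"],
        ["screen", "battery"]]
w = tf_idf(docs)
# "bright" appears in only one of three reviews, so it is weighted highly
```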
Abstract: Particle swarm optimization (PSO) has shown good performance on global optimization problems. Traditional PSO has two main drawbacks, premature convergence and low convergence speed, especially on complex problems. This paper presents a new approach called Adaptive multi-layer particle swarm optimization with neighborhood search (AMPSONS), in which traditional PSO is improved with an adaptive multi-layer search and a neighborhood search strategy to achieve a trade-off between exploitation and exploration. To evaluate the proposed AMPSONS algorithm, its performance is compared with five other PSO-family algorithms, namely CLPSO, DNLPSO, DNSPSO, global MLPSO and local MLPSO, on a set of benchmark functions. The comparison shows that AMPSONS performs promisingly on the majority of the test functions.
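As a point of reference, the global-best PSO baseline that AMPSONS extends can be sketched as follows; this is plain PSO with assumed textbook parameters, without the adaptive multi-layer or neighborhood search strategies:

```python
import random

def pso(f, dim=2, swarm=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-5, hi=5):
    """Minimal global-best PSO minimizing f; seeded for reproducibility."""
    random.seed(0)
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    vel = [[0.0] * dim for _ in range(swarm)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(swarm), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                # inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * random.random() * (pbest[i][d] - pos[i][d])
                             + c2 * random.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best, val = pso(lambda p: sum(x * x for x in p))
# converges near the sphere function's optimum at the origin
```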
Abstract: We present a semi-supervised approach for software defect prediction. The proposed method is designed to address the problematic characteristics specific to software defect datasets, namely the lack of labeled samples and class-imbalanced data. To alleviate these problems, the method features the following components. Being semi-supervised, it exploits the wealth of unlabeled samples in software systems by evaluating the confidence probability of the predicted label of each unlabeled sample. We further propose to jointly optimize the classifier parameters and the dictionary through a task-driven formulation, to ensure that the learned features (sparse codes) are optimal for the trained classifier. Finally, during dictionary learning we take the different misclassification costs into consideration to improve prediction performance. Experimental results demonstrate that our method outperforms several representative state-of-the-art defect prediction methods.
Abstract: Heavy ion radiation experiments were performed on DC/DC converters with different topological structures for space applications. The test results were analyzed with respect to the functional failures of three topological structures caused by single event effects, and the relationships between the functional failure and the input supply voltage, the output load current and the topological structure of the module were discussed. Based on an analysis of how the source/drain terminal voltages of the MOSFETs vary with the input voltage and the output load, the sensitivity factors associated with single-event-induced functional failure were discussed. A new analysis of single event functional failure of DC/DC converters with different topologies is presented, which can be applied to radiation-hardened design and space applications.
Abstract: Query result caching is a crucial technique in search engines, reducing their response time and load. As search engines continuously update their indexes, the query results in long-lived cache entries may become stale, so a refresh mechanism is needed to enhance the freshness of cached results. We present a pre-judgment approach to improve the freshness of the result cache and design an incomplete allocation algorithm. We introduce a query-Time-to-live (TTL) and term-TTL structure to pre-judge the result cache: the query-TTL is used to pre-check the likelihood of a cache hit, and the term-TTL is applied to maintain all terms of the latest posting list. For the cache structure, we design a Queue-Hash structure and develop a corresponding incomplete allocation algorithm. Preliminary results demonstrate that our approaches improve the freshness of cached results and decrease processing overhead compared with approaches without pre-judgment.
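The query-TTL pre-check can be illustrated with a toy result cache in which a stale entry counts as a miss and is re-evaluated against the fresh index; the class and its interface are illustrative assumptions, not the paper's Queue-Hash design:

```python
import time

class TTLCache:
    """Toy result cache with a per-entry time-to-live: a stale entry is
    treated as a miss so the caller re-evaluates the query."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}          # query -> (results, insertion time)

    def get(self, query, now=None):
        now = time.monotonic() if now is None else now
        entry = self.store.get(query)
        if entry is None or now - entry[1] > self.ttl:
            return None          # miss or stale: caller refreshes the entry
        return entry[0]

    def put(self, query, results, now=None):
        now = time.monotonic() if now is None else now
        self.store[query] = (results, now)

cache = TTLCache(ttl_seconds=60)
cache.put("foo", ["doc1", "doc2"], now=0.0)
assert cache.get("foo", now=30.0) == ["doc1", "doc2"]  # fresh hit
assert cache.get("foo", now=90.0) is None              # stale: re-evaluate
```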
Abstract: Granular computing (GrC) is an emerging computing paradigm and an umbrella term for exploring multilevel granularity. We present a generic abstract mathematical model of the granular system. Taking the inter-granule structure to be an algebra, we propose the algebraic quotient space model. In this model, granulation is based on a congruence relation, and all the congruence relations on a granular system form a complete semi-order lattice, which is the theoretical basis for transformation, composition and decomposition among different granularities. The different granulation rules of the topological quotient space model and the algebraic quotient space model lead to their dissimilarity when composing granularities. A real-world case study demonstrates how the algebraic quotient space model works in network transmission with error-correcting codes. This work shows that the granular system model and the algebraic quotient space model are powerful conceptual modeling and functional specification methodologies for GrC.
Abstract: Images captured in foggy or hazy weather often suffer from poor visibility. The dark channel prior method solves the single-image dehazing problem well in natural scenes, but it fails when scene objects are inherently similar to the atmospheric light and no shadow is cast on them. We propose an efficient regularization method that adds a scene radiance constraint and combines it with the dark channel prior to remove haze from a single input image. Experiments show that the improved algorithm can handle various levels of fog, and greatly enhances image visibility and detail. In addition, the recovered haze-free image has little or no halo artifacts.
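The dark channel prior the method builds on can be sketched directly: a per-pixel channel minimum followed by a patch-wise spatial minimum. This is a minimal illustration of the prior itself, without the paper's radiance constraint or any refinement step:

```python
def dark_channel(img, patch=3):
    """Dark channel of an RGB image given as a list of rows of (r, g, b)
    tuples. In haze-free regions this value is close to 0, which is the
    statistical observation the dehazing constraint builds on."""
    h, w = len(img), len(img[0])
    chan_min = [[min(px) for px in row] for row in img]   # min over channels
    half = patch // 2
    dark = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            # spatial min over the patch centered at (i, j), clipped to edges
            dark[i][j] = min(chan_min[y][x]
                             for y in range(max(0, i - half), min(h, i + half + 1))
                             for x in range(max(0, j - half), min(w, j + half + 1)))
    return dark

img = [[(200, 180, 160), (30, 40, 50)],
       [(90, 10, 70), (220, 210, 230)]]
print(dark_channel(img, patch=3))  # every entry collapses to the global min 10
```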
Abstract: The identity vector (i-vector) approach has been the state of the art in text-independent speaker recognition, both identification and verification, in recent years. An i-vector is a low-dimensional vector in the so-called total variability space, represented with a thin and tall rectangular matrix. This paper introduces a novel algorithm to reduce the computational and memory requirements of this application. In our method, the series of symmetric matrices can be represented by diagonal expressions sharing the same dictionary, which is to some extent analogous to eigendecomposition, so we name the algorithm Eigen decomposition like factorization (EDLF). Compared with similar algorithms under the same conditions, our method shows no disadvantage in identification accuracy.
Abstract: Software-defined networks (SDN) maintain a global view of the network, thus improving the intelligence of forwarding decisions. With the expansion of network scale, distributed controllers are used in a variety of large-scale networks, in which the subnetworks managed by controller instances are called autonomous domains. We analyze the statistical frequency of communication across autonomous domains, calculate the correlations between autonomous domains for the controller instances from the acquired statistics, and cache network views at highly correlated controller instances. The distributed controllers are thereby capable of balancing both average response time and overall storage. Experiments show that our method takes full advantage of these two performance indicators.
Abstract: With the increasing number of users, enterprise networks have become more and more important, but they also face new challenges in various aspects. OpenFlow, the promising de facto Software defined networking (SDN) scheme, can provide fine-grained, flow-level control for enterprise networks, yet it still has a few undesirable designs with respect to credibility and scalability. Inspired by the many excellent OpenFlow-based studies, we propose SuperFlow, a reliable and scalable architecture for large-scale enterprise networks. It inherits the merits of OpenFlow and overcomes OpenFlow's limitations by introducing some novel features. A prototype experiment shows that SuperFlow provides these features with desirable performance.
Abstract: Traditional anomaly detection algorithms have improved the negative selection mechanism to some degree. There still remain many problems, such as the randomness of detector generation, the incompleteness of the self-set and the limited generalization ability of detectors, which cause many loopholes in the non-self space. We propose a heuristic algorithm based on a second distribution of real-valued detectors, covering both the loopholes left in the non-self space by the first distribution and the mutation regions of the self space. The algorithm distributes real-valued detectors over the omission data using partition and movement methods. A method is also proposed to obtain optimal values for the parameters involved in the algorithm. Theoretical analysis and experimental results prove the universality and effectiveness of the method, and show that our algorithm can effectively avoid the generation of loopholes and thus reduce the omission rate of the detector sets.
Abstract: A game theory based scheduling method for household electricity consumption in the smart grid is proposed in this paper. A non-cooperative game model is employed for the scheduling: all household consumers compete with each other and control their loads to maximize their payoffs. To solve the scheduling problem, we formulate a Utility optimization (UO) model, in which electricity cost reduction and the improvement of consumers' comfort and preference are considered simultaneously, whereas the System optimization (SO) model only minimizes electricity cost when scheduling consumers' electricity consumption. We then compare and analyze the two models in numerical simulations. The existence and uniqueness of the Nash equilibrium of the proposed game model are proved. Simulation results show that the UO model provides an effective scheduling approach that achieves higher comfort and preference while decreasing the energy cost.
Abstract: Prediction and key-factor identification for lot Cycle time (CT) and Equipment utilization (EU), which are Key performance indicators (KPI), are vital for multi-objective optimization in the semiconductor manufacturing industry. This paper proposes a methodology that predicts CT and EU simultaneously and identifies their key factors. A Bayesian neural network (BNN) is used to establish the simultaneous prediction model for Multiple key performance indicators (MKPI), with Bayes' theorem as the key to controlling model complexity. A closed-loop structure is built to keep the MKPI prediction model stable, and a weight analysis method forms the basis for identifying the key factors of CT and EU. Compared with an Artificial neural network (ANN) and a Selective naive Bayesian classifier (SNBC), simulation results show the BNN prediction method to be more feasible and effective, with notably higher prediction accuracy.
Abstract: The conventional Kalman filter (KF), which uses the current measurement to estimate the current state, is a posterior estimator, and is optimal for linear models with Gaussian noise. However, with incomplete information the performance of the KF may degrade or diverge. To improve its performance, an Amended KF (AKF) is proposed that uses additional posterior measurements. The principle, derivation and recursive process of the AKF are presented, and the differences among the Kalman smoother, the adaptive fading method and the AKF are analyzed. Simulation results for target tracking with different motion-model covariances indicate the high precision and robustness of the AKF.
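For reference, the conventional posterior KF that the AKF amends can be sketched in its simplest scalar form; this is a textbook random-walk example with assumed noise parameters, not the paper's AKF recursion:

```python
def kalman_1d(measurements, q=1e-3, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state: each step predicts,
    then corrects with the current measurement (the posterior estimate)."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p += q                      # predict: process noise grows variance
        k = p / (p + r)             # Kalman gain
        x += k * (z - x)            # update with current measurement only
        p *= (1.0 - k)
        estimates.append(x)
    return estimates

zs = [1.2, 0.8, 1.1, 0.9, 1.05, 0.95]   # noisy readings of a constant ≈ 1
est = kalman_1d(zs)
# the filtered estimate settles near the true value 1.0
```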
Abstract: We propose a fast satellite selection algorithm for multi-constellation positioning, based on Newton's identities for fast Geometric dilution of precision (GDOP) computation and on an optimal satellite geometric distribution that reduces the number of computation cycles. An effective closed-form formula is proposed for GDOP approximation, avoiding the conventional matrix inversion and its complexity, and the auxiliary optimal satellite geometric distribution further reduces computational cycles; together they improve the efficiency of real-time positioning. Simulation results indicate savings of more than 44% and 99% in computational complexity when selecting 7 satellites from 10 and from 30 visible satellites, respectively. The GDOP of the proposed method is very close to the optimal result, and better than that of the Quasi-Optimal algorithm.
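The exact GDOP that the closed-form formula approximates is sqrt(trace((HᵀH)⁻¹)), with H rows [ux, uy, uz, 1] built from unit line-of-sight vectors. A direct sketch using plain matrix inversion, i.e. the conventional computation the paper avoids:

```python
import math

def gdop(los_vectors):
    """GDOP from unit line-of-sight vectors: sqrt(trace((H^T H)^-1)),
    with the 4x4 inverse computed by Gauss-Jordan elimination."""
    H = [list(u) + [1.0] for u in los_vectors]
    # A = H^T H (4x4 normal matrix)
    A = [[sum(H[k][i] * H[k][j] for k in range(len(H))) for j in range(4)]
         for i in range(4)]
    # Augment with the identity and invert in place.
    for i in range(4):
        A[i] += [1.0 if i == j else 0.0 for j in range(4)]
    for col in range(4):
        pivot = max(range(col, 4), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        pv = A[col][col]
        A[col] = [v / pv for v in A[col]]
        for r in range(4):
            if r != col:
                factor = A[r][col]
                A[r] = [v - factor * w for v, w in zip(A[r], A[col])]
    return math.sqrt(sum(A[i][4 + i] for i in range(4)))

# A well-spread tetrahedral geometry of four satellites (unit vectors).
s = 1.0 / math.sqrt(3.0)
sats = [(s, s, s), (s, -s, -s), (-s, s, -s), (-s, -s, s)]
print(round(gdop(sats), 3))  # → 1.581, i.e. sqrt(2.5) for this geometry
```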
Abstract: An improved algorithm based on the Multi-agent particle swarm (MAS) is proposed in this paper to solve the distribution network reconfiguration problem. The approach combines the learning, competition and cooperation mechanisms of multi-agent technology with the strategies of the Particle swarm optimization (PSO) algorithm. Using the Von Neumann topology of the PSO algorithm, each particle represents an agent; each agent not only competes and cooperates with its neighborhood, but also absorbs the evolutionary mechanism of PSO, sharing information with the globally optimal agent. The particle-renewal rules reduce infeasible solutions during particle renewal, enabling accurate and fast convergence to the global optimum. Tests on the IEEE 16-node, 32-node and 69-node systems show both the rapid convergence and the good robustness of the proposed approach.