Abstract: As a meta-heuristic approach, Ant colony optimization (ACO) has many applications. In the algorithm, the selection of pheromone models is a top priority, and selecting pheromone models that do not suffer from negative bias is a natural choice. For the travelling salesman problem in particular, the first-order pheromone model is widely adopted. Taking the travelling salesman problem as a test case, we study the reasons for the success of ant colony optimization from the perspective of pheromone models, and unify pheromone models of different orders. In our tests, we introduce the concepts of sample locations and the similarity coefficient into pheromone models. The first-order and second-order pheromone models are compared and further analysed. We show that the second-order pheromone model has better global search ability and population diversity than the first-order model, and that on appropriately sized travelling salesman problems the second-order model performs better.
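To make the contrast concrete, here is a minimal sketch of the two pheromone-update rules for a TSP tour, assuming standard evaporation-and-deposit updates. The function names, the parameters (evaporation rate `rho`, deposit constant `q`) and the reading of "second order" as pheromone stored on pairs of consecutive edges (city triples) are our illustrative assumptions, not the paper's definitions:

```python
def update_first_order(tau, tour, rho=0.1, q=1.0, length=1.0):
    """First-order model: deposit pheromone on individual edges (i, j)."""
    deposit = q / length
    for a, b in zip(tour, tour[1:] + tour[:1]):
        e = (min(a, b), max(a, b))
        tau[e] = (1 - rho) * tau.get(e, 1.0) + deposit
    return tau

def update_second_order(tau2, tour, rho=0.1, q=1.0, length=1.0):
    """Second-order model: deposit pheromone on pairs of consecutive
    edges, i.e. city triples (i, j, k), so the trail memorises longer
    fragments of good tours than single edges do."""
    deposit = q / length
    closed = tour + tour[:2]
    for i in range(len(tour)):
        t = tuple(closed[i:i + 3])
        tau2[t] = (1 - rho) * tau2.get(t, 1.0) + deposit
    return tau2

tour = [0, 1, 2, 3]
tau = update_first_order({}, tour, length=4.0)    # 4 edges reinforced
tau2 = update_second_order({}, tour, length=4.0)  # 4 triples reinforced
```

The triple-indexed trail has a larger state space, which is one intuition for the greater population diversity the abstract attributes to the second-order model.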
Abstract: Linked open data (LOD) provides strong support for SPARQL queries. A system for translating natural language queries into SPARQL queries based on syntax rules is proposed. For a natural language query, a parsing method is proposed to represent the query intention and construct the corresponding query graph. Algorithms for obtaining and instantiating triple patterns are designed based on these rules. Finally, a mapping method for different types of graph nodes is proposed to improve recall. Experiments are conducted on test data from QALD-4. Compared with other systems, ours is simpler and more effective, and its evaluation results are outstanding in the field of unsupervised learning.
Abstract: In 2002, Juels and Sudan proposed an algorithm for computing a fuzzy vault that binds a user's biometric template with his secret. It was suggested that this vault could be used to securely store one's secret or a cryptographic key without exposing the biometric information. However, in this classical fuzzy vault, an attacker who captures multiple vaults generated from one biometric template can obtain some biometric template information by cross-matching, and then use it to illegally recover the secret message. To overcome this disadvantage, this paper proposes a cancelable fuzzy vault algorithm in which the vault is generated from the user's transformed fingerprint features. Our novel fuzzy vault is secure and can withstand the cross-matching attack without intensive computational complexity. Moreover, the use of three check values gives our vault a much higher probability of detecting a false query fingerprint template than some other vault versions, and greatly improves the probability of determining whether the reconstructed polynomial is correct.
Abstract: To deeply understand neuromorphic electronic systems, a basic chemical synapse model is built from logistic Euler inverse mapping and transformation in tandem; a 3T-MOS triode-trigger-topology (3T3) is designed in the sub-threshold region at a 100mV supply for a compact synaptic circuit; and the power consumption of a synapse in brain circuitry (the "ion-bump city") is compared, at the level of Power-delay products (PDP), with its five mimic blocks: an HfOx/AlOx-type memristor, a modified 2T-MOS K+-bump circuit, the 3T3 synaptic circuit, a 4T-Schmitt-oscillator based synaptic circuit, and Chua's circuit. This work supports the search for novel feature operators in advancing brain-health microelectronics.
Abstract: An efficient multi-rate encoder for IEEE 802.16e LDPC codes is proposed, which outperforms current single-rate encoders with acceptable hardware consumption and efficient memory consumption. The design utilizes the common dual-diagonal structure of the parity matrices to avoid the computationally expensive inverse matrix operation. Parallel Matrix-vector multiplication (MVM) units, bidirectional operation and storage compression are applied to the multi-rate encoder to increase the encoding speed and significantly reduce the number of memory bits required. The proposed encoding architecture also contributes to the design of multi-rate encoders whose parity matrices are dual-diagonally structured and have an Approximately lower triangular (ALT) form, such as in IEEE 802.11n and IEEE 802.22. Simulation results verify that the proposed encoder works efficiently for all code rates specified in the WiMAX standard. With a maximum clock frequency of 117MHz, the encoder achieves 3 to 10 times higher throughput than prior works. It can switch among six rates by adjusting an input parameter, and achieves a throughput of up to 1Gbps.
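The central trick, exploiting the dual-diagonal parity structure so that the parity bits fall out by back-substitution over GF(2) instead of a matrix inversion, can be sketched as follows. The matrix shapes and names are illustrative, not the standard's exact submatrices:

```python
def encode_dual_diagonal(A, s):
    """Solve A*s + T*p = 0 over GF(2) for the parity bits p, where T is
    dual-diagonal (ones on the main diagonal and the one below it), so
    (T*p)[i] = p[i] xor p[i-1] and p is found by back-substitution:
    p[i] = (A*s)[i] xor p[i-1]. No inverse of T is ever computed."""
    As = [sum(a * b for a, b in zip(row, s)) % 2 for row in A]
    p, prev = [], 0
    for v in As:
        prev = (v + prev) % 2
        p.append(prev)
    return p

# toy example: 2 systematic bits, 3 parity bits
A = [[1, 0], [1, 1], [0, 1]]
p = encode_dual_diagonal(A, [1, 1])  # p = [1, 1, 0]
```

In hardware this chain of XORs is exactly what makes the encoder cheap: each parity bit depends only on one row product and the previous parity bit.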
Abstract: This research mainly expounds a decision-level software defect prediction theory. The defect characteristics are the first research focus: a characteristic comparison set is built out of the existing defect characteristics according to their dissimilarity, and the defect characteristics outside the characteristic comparison set are organized into defect characteristic clusters to reduce the scale of the characteristic data. The defects themselves are the second research focus: vector weights are assigned to the defect characteristics contained in the defects according to the minimum critical characteristic set. Moreover, multi-agent algorithm integration technology is used to predict defects according to the repulsive relationship between similar defect clusters.
Abstract: With the widespread utilization of multicore processors, operating systems face new scalability challenges. The lock mechanism used by current monolithic OSs causes some critical modules to hit performance bottlenecks due to lock contention. The contention in Linux process lifecycle management results in process Creation, execution and termination (CET) scaling poorly as the core count increases. Differing from the centralized fashion of Linux, this paper presents a Distributed process management model (DPMM) on a microkernel for better scalability. The kernel functions are distributed into message-passing servers running on different cores. DPMM splits the shared data, including Process control blocks (PCBs), page pools and buffer caches, across these cooperating servers. Actions including PCB creation/deletion, page allocation/freeing, executable loading and scheduling can be parallelized without locking. Our micro-benchmarks show that DPMM achieves nearly linear performance growth on process CET and outperforms Linux and Minix3 on a 32-core machine.
Abstract: Key exchange protocols using both a smart card and a password are widely used nowadays, since they provide greater convenience and stronger security than protocols using only a password. Most of these protocols are limited to simple network systems and may have security risks. We propose a general construction for key exchange protocols using a smart card and password to avoid these flaws. A protocol built from the general construction has only one more communication round than the original public-key encryption scheme. The construction is proven secure in the random oracle model, so it can resist several common types of attacks, and it adapts well to various networks. Compared with related protocols, the proposed key exchange protocol generated from the general construction has better security properties and good computational efficiency in storage cost and operation time.
Abstract: A program, when being executed, acts like a mirror that produces mirror images of the objects in front of it. A mirror distinguishes itself from others by the way it changes the shape of an object; a program, likewise, can be characterized by the way the final values of its variables (the mirror images) are related to their initial values (the objects). Axioms that specify the relationships between the final and initial values of variables in a program are proposed. The logic discussed in this paper serves to prove and deduce properties based on the given axioms. Both the axioms and the logic belong to a theory, namely the mirror theory of programming, in which a program appears as an operation expression set. It is a step forward towards verified software.
Abstract: Due to the popularity of the mobile internet and location-aware devices, there is an explosion of location and trajectory data of moving objects. A number of approaches have been proposed for privacy-preserving trajectory data publishing, and most of them assume attackers with the same adversarial background knowledge. In practice, different users have different privacy requirements. Such a non-personalized privacy assumption does not meet personalized privacy requirements; meanwhile, it loses the chance to achieve better utility by taking advantage of the differences among users' privacy requirements. We study a personalized trajectory k-anonymity criterion for trajectory data publication. Specifically, we explore and propose an overall framework that provides privacy-preserving services based on users' personal privacy requests, including trajectory clustering, editing and publication. We demonstrate the efficiency and effectiveness of our scheme through experiments on a real-world dataset.
Abstract: To address the complexity and long latency of division, a method for designing a high-performance Floating-point (FP) divider based on the Goldschmidt algorithm is proposed. Bipartite reciprocal tables are adopted to obtain the initial value of the iteration while saving area, and parallel multipliers are employed in the iteration unit to reduce latency. An FP divider supporting pipelined execution under the control of a state machine is presented to increase throughput. The design was implemented in a Digital signal processing (DSP) chip by sharing the existing multipliers.
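For readers unfamiliar with it, the Goldschmidt iteration at the heart of such dividers can be sketched in a few lines. This is a textbook software model, not the paper's hardware datapath; in the actual design the first correction factor would come from the bipartite reciprocal table rather than the raw denominator:

```python
def goldschmidt_divide(a, b, iterations=4):
    """Goldschmidt division: repeatedly multiply numerator and
    denominator by a correction factor f = 2 - d, driving d -> 1 and
    n -> a/b. Convergence is quadratic, so a few iterations suffice.
    Assumes b is pre-scaled into [0.5, 1) as in a hardware datapath."""
    n, d = a, b
    for _ in range(iterations):
        f = 2.0 - d   # correction factor (hardware would seed this from
        n *= f        # a bipartite reciprocal table for a head start)
        d *= f
    return n

q = goldschmidt_divide(0.6, 0.8)  # converges to 0.6 / 0.8 = 0.75
```

Because the two multiplications per step are independent, they map naturally onto the parallel multipliers the abstract mentions.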
Abstract: We introduce the concept of syntactic monoids of formal power series through syntactic congruences on a free monoid, and we study recognition of formal power series by monoids, as well as the basic properties of syntactic congruences and syntactic monoids. We also prove that syntactic monoids of formal power series are sub-direct products of syntactic monoids of crisp cut series. We present the Myhill-Nerode theorem for formal power series and provide precise characterizations of a regular series and its syntactic monoid. We show an Eilenberg-type theorem for formal power series, establishing a bijective correspondence among varieties of regular series, varieties of regular languages and varieties of monoids.
Abstract: Singular value decomposition (SVD) is a tool widely used in data denoising, matrix approximation, recommendation systems, text mining and computer vision. A majority of applications prefer sparse singular vectors, which capture the inherent structures and patterns of the input data and make the results interpretable. We present a novel penalty for SVD to achieve sparsity. Compared with traditional penalties, the proposed penalty is scale- and dimension-insensitive and bounded between 0 and 1, which favors sparsity control. Regulated by the penalty, we provide an efficient algorithm that projects a vector onto a given sparsity level in O(n) expected time. This efficient projection algorithm serves as the workhorse of sparse SVD (SSVD). In experiments, SSVD is efficient and captures the latent structures and patterns of the input data.
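As an illustration of an O(n)-expected-time sparse projection, here is a generic hard-thresholding projection that keeps the k largest-magnitude entries, using randomized quickselect for the expected linear time. The paper's penalty-specific projection may differ in detail; this sketch only shows how the claimed complexity is achievable:

```python
import random

def _kth_largest(values, k):
    """Expected O(n) quickselect for the k-th largest value (1-indexed)."""
    xs = list(values)
    while True:
        pivot = random.choice(xs)
        highs = [x for x in xs if x > pivot]
        n_eq = sum(1 for x in xs if x == pivot)
        if k <= len(highs):
            xs = highs
        elif k <= len(highs) + n_eq:
            return pivot
        else:
            k -= len(highs) + n_eq
            xs = [x for x in xs if x < pivot]

def project_sparse(v, k):
    """Zero all but the k largest-magnitude entries of v (projection
    onto the set of k-sparse vectors). Quickselect keeps the whole
    projection at O(n) expected time, matching the abstract's claim."""
    if k >= len(v):
        return list(v)
    thresh = _kth_largest([abs(x) for x in v], k)
    out, kept = [], 0
    for x in v:
        if abs(x) >= thresh and kept < k:
            out.append(x)
            kept += 1
        else:
            out.append(0.0)
    return out

p = project_sparse([0.1, -3.0, 2.0, 0.5], 2)  # [0.0, -3.0, 2.0, 0.0]
```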
Abstract: We propose a verifiable multi-secret sharing scheme without a dealer. We use cellular automata, which have the properties of linear computation and parallel computation, to construct our scheme. The linear computational property makes it possible to build the scheme without the assistance of a trusted dealer, and the parallel computational property contributes to its high efficiency. A hash function is used to verify the shares, which also makes our scheme more efficient. The security of the scheme is analyzed as well.
Abstract: Different patterns in one object cause unequal saliency degrees, which makes it hard to highlight the object region uniformly. We propose a salient region detection method that mainly includes image abstraction, saliency calculation and integration. Within this detection framework, hierarchical spatial information is introduced to improve performance. Image abstraction with "pixel level" spatial information is applied to capture meaningful elements. The local contrast is calculated with "element level" spatial information. "Object level" spatial information is represented as compactness and background possibility, which further helps to pop out the object region and suppress the background. The results show that our method performs well even when the object consists of complex patterns.
Abstract: Most Voice activity detection (VAD) methods are based on statistical models in which the noise signal is assumed to follow a Gaussian distribution. This assumption does not always hold in practice, causing such methods to fail to distinguish speech from noise at low Signal-noise-ratio (SNR) levels in non-stationary noise conditions. To further improve the robustness of VAD, an enhanced-speech based method is proposed. In the proposed method, a Laplacian distribution is used to model the residual noise, since we find that the residual noise in enhanced speech follows a Laplacian distribution; in addition, a Gaussian mixture model is used to characterize the Discrete Fourier transform (DFT) coefficients of the reconstructed speech within the enhanced speech. Experimental results show that the proposed method performs better than the baseline method, especially in low-SNR and non-stationary noise conditions.
Abstract: The joint block diagonalization algorithm is instrumental to convolutive blind signal separation. However, the nonorthogonal joint block diagonalization algorithm based on a Weighted least-squares (WLS) criterion sometimes converges to a degenerate or trivial solution. The trivial solution is simple to avoid by adding a constraint on the de-mixing matrix, but the degenerate solution is hard to exclude. This paper investigates the phenomenon of the degenerate solution. By adding a new penalty function to the WLS criterion, an efficient algorithm that avoids the degenerate solution is developed. Through theoretical derivation and numerical simulation, we conclude that the proposed algorithm is more robust and is insensitive to whether the mixing matrix is square or non-square.
Abstract: Following Doppler radar technology, the dual-linear polarization radar, which has been widely applied to enhance modern weather radar systems, is becoming more and more popular in China. Microphysical characteristics of hydrometeors, such as differential reflectivity, specific differential phase, and zero-lag correlation coefficient, are usually provided by the radar. However, inevitable measurement errors exist because of unwanted horizontal and vertical wave coupling brought about by the angular orientation of the feed, drop canting, and backscatter depolarization. Since small errors in these sensitive parameters may lead to erroneous judgments, both the factors themselves and their weights of influence should be studied specifically, in the mode of simultaneous transmission and reception of horizontally and vertically polarized waves. The study provides a theoretical basis for the design of dual-polarization Doppler radar, and will be helpful in the research of cloud and precipitation physics and weather modification.
Abstract: To detect copy-paste tampering, an improved SIFT (Scale invariant feature transform)-based algorithm is proposed. A maximum angle is defined and a maximum angle-based marked graph is constructed. The marked graph feature vector is assigned to each SIFT key point via a discrete polar coordinate transformation. Key points are then matched to detect the copy-paste tampered regions. The experimental results show that the proposed algorithm can effectively identify and detect rotated or scaled copy-paste regions and, in comparison with previously reported methods, is resistant to post-processing such as blurring, Gaussian white noise and JPEG recompression. The proposed algorithm also outperforms the existing algorithm in dealing with scaling transformations.
Abstract: A new statistical analysis model is proposed for the Affine projection algorithm with direction error (AP-DE). Four assumptions are made, based on the direction vector of the AP-DE algorithm. Under these assumptions, deterministic recursive equations for the weight error and the mean-square error are derived. We also analyze the steady-state behavior of the AP-DE algorithm. Simulation results are provided to corroborate the analytical results.
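For context, the basic affine projection weight update being analyzed can be sketched in its simplest form, projection order one, which coincides with NLMS. This background sketch omits the direction error the paper studies; the system-identification setup below is our own toy example:

```python
import random

def nlms_step(w, x, d, mu=1.0, eps=1e-8):
    """One affine projection update with projection order 1 (the NLMS
    special case): move w along the input direction x so as to cancel
    the a-priori error e = d - w.x."""
    e = d - sum(wi * xi for wi, xi in zip(w, x))
    norm2 = sum(xi * xi for xi in x) + eps
    return [wi + mu * e * xi / norm2 for wi, xi in zip(w, x)], e

# identify an unknown 2-tap system h with the adaptive filter w
random.seed(0)
h = [1.0, -0.5]
w = [0.0, 0.0]
buf = [0.0, 0.0]
for _ in range(2000):
    buf = [random.uniform(-1, 1)] + buf[:1]   # shift in a new sample
    d = h[0] * buf[0] + h[1] * buf[1]         # noise-free desired signal
    w, e = nlms_step(w, buf, d)               # w converges toward h
```

The statistical analysis in the paper concerns what happens to recursions like this when the projection direction itself is perturbed by error.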
Abstract: We propose a new variational model to reduce the staircase effect that often appears in Total variation (TV) based image denoising models. The model uses the BV-seminorm and the Besov-seminorm to measure the piecewise constant and piecewise smooth components of the image, respectively. We discuss the nontrivial property of the proposed model and introduce an alternating iteration algorithm that combines the dual projection algorithm with the Wavelet soft thresholding (WST) algorithm to solve the model numerically. The experimental results show that the proposed model is effective for noise removal and staircase reduction, while contours are preserved in the denoised images. Furthermore, compared with two classical staircase reduction models, CEP2 and TGV, the proposed model is much faster.
Abstract: As a promising technique, sparse coding can be widely used for the representation, compression, denoising and separation of signals. This technique has been introduced into noisy speech processing, where enhancing the speech itself or its features remains a challenge. Unlike other fields where noise is dense, the noise in speech is often sparse or partly sparse over the speech dictionary, resulting in performance degradation. It is therefore necessary to understand the noise conditions of speech environments and the applicable range of sparse coding. This paper analyzes the assumptions of sparse coding and provides bounds on the reconstruction error for two widely used sparse coding methods. Based on this analysis, the performance of the two methods under different conditions is compared. The results show that the performance of sparse coding can be improved by a well-prepared noise dictionary. Experiments on speech enhancement and recognition are conducted, and the results coincide well with the theoretical analysis.
Abstract: To address the open-channel nature of Ad hoc networks, in which data communication is easily eavesdropped, this paper proposes an anti-eavesdropping algorithm based on network coding. The algorithm builds on the RSA signature algorithm, introducing a timestamp and a homomorphic mechanism to detect tampering and replay attacks; the resulting node-safety measure serves as one of the metrics used by a router to establish t routing entries. When the source node encodes, it generates encoding vectors by introducing a random number, which is divided into n pieces. As long as the sink node receives t of these fragments, it can restore the original encoding vector and decode; an eavesdropper who wiretaps fewer than t fragments cannot obtain any meaningful information. The algorithm is designed to increase the number of linearly independent coded packets, and adopts a generational grouping strategy during encoding to improve decoding efficiency. Simulation and theoretical analysis show that the proposed anti-eavesdropping algorithm improves network performance and security through coding gain.
Abstract: In Wireless sensor network (WSN) applications, effective link quality estimation is a basic issue in guaranteeing reliable data transmission and upper-layer network protocol performance. A link quality estimation mechanism based on a Support vector machine (SVM) with multi-class classification is proposed. Based on an analysis of wireless link characteristics, two physical communication parameters, the Receive signal strength indicator (RSSI) and the Link quality indicator (LQI), are chosen as estimation parameters. The link quality is divided into five levels according to the Packet reception rate (PRR). A link quality estimation model based on an SVM with a decision tree is established, built on radial basis and polynomial kernel functions respectively, with RSSI and LQI as the input parameters. The experimental results show that the model is reasonable. Compared with recently published link quality estimation models, our model can estimate the current link quality accurately with a relatively small number of probe packets, and thus consumes less energy than approaches that send a large number of probe packets. This efficient, energy-saving model can therefore prolong the network lifetime.
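The two structural ideas, a five-level PRR labeling and a decision-tree arrangement of binary classifiers over (RSSI, LQI), can be sketched as follows. The PRR cut points and the classifier chain are illustrative placeholders; the abstract does not give the paper's exact thresholds or trained SVMs:

```python
def prr_level(prr):
    """Map Packet reception rate (0..1) to one of five link-quality
    levels. The cut points here are illustrative assumptions."""
    if prr >= 0.9: return 5  # very good
    if prr >= 0.7: return 4  # good
    if prr >= 0.5: return 3  # intermediate
    if prr >= 0.3: return 2  # bad
    return 1                 # very bad

def classify(rssi, lqi, binary_classifiers):
    """Decision-tree multi-class scheme: a chain of binary classifiers
    (trained SVMs over RSSI/LQI in the paper), each peeling off one
    level; the first classifier that fires decides the level."""
    for level, clf in enumerate(binary_classifiers, start=1):
        if clf(rssi, lqi):
            return level
    return len(binary_classifiers) + 1

# toy chain of two threshold "classifiers" standing in for trained SVMs
chain = [lambda r, l: l < 50, lambda r, l: l < 80]
level = classify(-80, 60, chain)  # second classifier fires -> level 2
```

Each binary split needs only one SVM evaluation, which keeps the per-estimate cost (and hence the probe-packet energy budget) low.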
Abstract: The frequency-domain approach is widely used in Global positioning system (GPS) receiver tracking loop performance analysis, while implementations use a Digital phase-locked loop (DPLL). Discretization errors cause a difference between the phase errors predicted by traditional frequency-domain analysis and the real receiver output. To obtain more precise results, this paper uses time-domain difference equations to analyze the phase tracking errors under thermal noise and dynamic stress for different orders of PLL. The simulation results show that the traditional frequency-domain approach exhibits phase error deviation under long integration times (i.e., more than 10ms), while the time-domain approach agrees well with receiver simulation output. Thus, the proposed time-domain approach provides a more precise solution for PLL performance analysis.
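The time-domain idea, iterating the loop's difference equations directly rather than analyzing a continuous-time transfer function, can be sketched for a second-order loop under dynamic stress (a constant frequency offset). The gain formulas and bandwidth-to-natural-frequency relation below are textbook values for a damping ratio of 0.707, not necessarily the paper's:

```python
import math

def dpll_phase_error(freq_offset_hz, n_steps=2000, bn=18.0, t=0.001):
    """Iterate the difference equations of a 2nd-order digital PLL
    tracking a constant frequency offset, returning the final phase
    error (rad). bn: loop noise bandwidth (Hz); t: update interval (s).
    A 2nd-order loop drives this steady-state error to zero."""
    wn = bn / 0.53                  # natural frequency, damping 0.707
    k1, k2 = 1.414 * wn, wn * wn    # proportional / integral gains
    nco_phase, integ, err = 0.0, 0.0, 0.0
    for k in range(n_steps):
        true_phase = 2 * math.pi * freq_offset_hz * k * t
        err = true_phase - nco_phase        # discriminator output
        integ += k2 * err * t               # integral branch
        nco_phase += (k1 * err + integ) * t # NCO phase accumulator
    return err

final_err = dpll_phase_error(5.0)  # settles to ~0 for a 5 Hz offset
```

With a long update interval t, this discrete recursion visibly departs from the continuous-time prediction, which is exactly the deviation the paper quantifies.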
Abstract: Data wiping is a useful technology that prevents possible data recovery in a file system, but growth in the amount of data usually leads to a decline in wiping efficiency. To improve on this, a novel data wiping scheme for the Ext4 file system, DWSE, is proposed. It includes two algorithms for adaptively wiping files and free space. According to a rest-block rate specified by the user, the file wiping algorithm WFile cleans part of a selected file to save time. The free space wiping algorithm WFree speeds up the cleaning of dirty blocks by employing a random sampling and hypothesis testing method with two adjustable rates that represent the status and content of a block group. A journal cleaning algorithm, CleanJ, is also proposed, which removes old records by creating and deleting temporary files to prevent data recovery from the journal file. With these parameters, users can wipe their data with a chosen balance between security and efficiency. Several experiments are performed on our scheme. The experimental results show that our scheme can wipe files and free space at different security and efficiency levels under different parameters, and achieves higher security and efficiency than two other data wiping schemes.
Abstract: Security issues of spectrum sensing have drawn much attention in Cognitive radio networks (CRNs). Malicious users can mislead the network into making wrong decisions about channel states by tampering with spectrum sensing data. To defend against the Spectrum sensing data falsification (SSDF) attack, we propose a neighbor detection-based spectrum sensing algorithm for distributed CRNs, which detects attackers with the help of neighbors during spectrum sensing to improve the accuracy of decision making. The proposed scheme also guarantees the connectivity of the network. Simulation results illustrate that the proposed scheme can defend against SSDF attacks effectively and reach a unified view of the spectrum sensing data.
Abstract: We propose a lightweight trust system for clustered wireless sensor networks based on mutual evaluation between cluster heads and sensor nodes. We evaluate the trust level of a cluster head in two aspects, namely its trust level as a service provider and as a supervisor, and we consider multidimensional trust attributes to compute the global trust value of a node. By means of a Petri net (PN), we illustrate the performance of an entity in our trust model. Theoretical analyses as well as simulations show that our model, with linear computational complexity, has lower memory and communication overhead than the current state-of-the-art trust management scheme. In addition, our model can detect malicious and selfish entities (especially malicious cluster heads) and resist various attacks efficiently.
Abstract: For classical template attacks, many papers have suggested a guideline for choosing interesting points that has remained unproven, and many different methods of choosing interesting points have been introduced. However, it is still unclear which approach leads to the best classification performance for template attacks. We comprehensively evaluate and compare the classification performance of template attacks using different methods of choosing interesting points. Evaluation results show that the Correlation power analysis based method and the Sum of squared pairwise t-differences based method lead to the best classification performance. We also find that some methods of choosing interesting points give the same results in the same circumstances. Finally, we prove that the guideline for choosing interesting points is correct by presenting a new way of conducting template attacks.
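To illustrate one of the evaluated selection methods, here is a minimal sketch of correlation-based interesting-point selection: rank each time sample of the traces by its absolute Pearson correlation with a leakage model, and keep the top-ranked samples. The toy data and leakage values are ours; a real attack would use measured traces and a model such as Hamming weight:

```python
def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for a constant column."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / ((vx * vy) ** 0.5 or 1.0)

def cpa_points(traces, leakage, top=3):
    """Correlation power analysis based selection: score each time
    sample by |corr(trace column, leakage model)| and keep the 'top'
    highest-scoring sample indices as the interesting points."""
    n_samples = len(traces[0])
    scores = []
    for j in range(n_samples):
        col = [tr[j] for tr in traces]
        scores.append((abs(pearson(col, leakage)), j))
    return sorted(j for _, j in sorted(scores, reverse=True)[:top])

# toy traces: only samples 1 and 3 carry the leakage
traces = [[1, 0, 5, 0.0, 2], [1, 1, 5, 0.5, 2],
          [1, 2, 5, 1.0, 2], [1, 3, 5, 1.5, 2]]
pts = cpa_points(traces, [0, 1, 2, 3], top=2)  # picks samples 1 and 3
```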
Abstract: In Large scale antenna systems (LSAS), the number of users is usually much smaller than the number of base station antennas. By leveraging this characteristic, Modular precoding based on zero forcing (MZF) is proposed to achieve higher spectrum efficiency than Maximum ratio combining (MRC), while preserving the advantage of distributed processing. The achievable rates of MZF, MRC and Zero forcing (ZF) are derived under a generalized model of imperfect Channel state information (CSI), which includes channel estimation delay, noise in the pilot channel, and pilot contamination. The analytical achievable rates match the simulations even with a small number of base station antennas. As the antenna number approaches infinity, the analytical achievable rates of MZF converge to those of ZF.
Abstract: Recent advances point out that existing community detection methods commonly face two challenges: incorrect base structures and incorrect membership of weak ties. To overcome both problems, a Local link structure (LLS) clustering based method for overlapping community detection is proposed. We extend the similarity of a pair of links to a group of links, named the LLS, and thus transform LLS mining into a pattern mining problem. We prove that an LLS with an appropriate threshold can filter weak ties in the form of bridges and local bridges whose span is larger than 3. A composite framework is presented for overlapping community detection based on LLS mining and clustering. Comparative experiments on both synthetic and real-world networks demonstrate that our method discovers higher-quality communities than six existing methods.
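For background, the pairwise link similarity that the LLS generalizes is commonly taken, in link-clustering community detection, as the Jaccard index of the inclusive neighbourhoods of the two links' non-shared endpoints. The sketch below shows that pairwise base case; the paper's group-level LLS definition is not given in the abstract:

```python
def link_similarity(e1, e2, adj):
    """Similarity of two links that share a node: Jaccard index of the
    inclusive neighbourhoods of their non-shared endpoints. Weak ties
    (bridges) score low because their endpoints share few neighbours."""
    shared = set(e1) & set(e2)
    if not shared:
        return 0.0
    a = (set(e1) - shared).pop()
    b = (set(e2) - shared).pop()
    na = adj[a] | {a}   # inclusive neighbourhood of a
    nb = adj[b] | {b}   # inclusive neighbourhood of b
    return len(na & nb) / len(na | nb)

# triangle 0-1-2 with a pendant bridge 2-3
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
s_in = link_similarity((0, 2), (1, 2), adj)   # links inside the triangle
s_out = link_similarity((2, 3), (0, 2), adj)  # bridge link scores lower
```

Thresholding such scores is the mechanism by which bridge-like weak ties get filtered out, as the abstract describes.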
Abstract: This paper focuses on selecting appropriate filtering locations to minimize the number of filtering routers in traceback-based packet filtering for defending against large-scale Bandwidth denial-of-service (BDoS) attacks. The filtering location problem can be viewed as a resource allocation problem; we formulate it as an integer linear programming model and design an exact and computationally efficient filtering location algorithm. The evaluation results show that our algorithm brings significant benefits in practice.