Abstract: In our study, active learning and semi-supervised learning are combined for label propagation among proteins with known functions in a Protein-protein interaction (PPI) network, so as to predict the functions of unknown proteins. Because real PPI networks generally contain overlapping protein nodes with multiple functions, mislabeling an overlapping protein may cause prediction errors to accumulate. For this reason, before the label propagation step of semi-supervised learning, the adjacency matrix is used to detect overlapping proteins. As a topological description of the interactions between proteins, the PPI network contains party hub protein nodes that play an important role through co-expression with their neighborhoods. Therefore, to reduce the manual labeling cost, the party hub proteins most beneficial for improving prediction accuracy are selected for class labeling, and these labeled party hub proteins are then added to the labeled sample set for subsequent semi-supervised learning. Experimental results on a real yeast PPI network show that the proposed algorithm achieves high prediction accuracy with few labeled samples.
Abstract: The sentiment classification of Chinese microblogs is a meaningful topic. Many studies have been conducted using rule-based and bag-of-words methods, and understanding the structural information of a sentence is the next target. We propose a sentiment classification method based on a Recurrent neural network (RNN). We adopt distributed word representations to construct a vector for each word in a sentence, then use the RNN to train fixed-dimension sentence vectors for sentences of different lengths, so that the sentence vectors contain both word semantic features and word sequence features; finally, a softmax regression classifier in the output layer predicts each sentence's sentiment orientation. Experimental results reveal that our method can capture the structural information of negative and double-negative sentences and achieves better accuracy. This way of computing sentence vectors helps to learn the deep structure of sentences and will be valuable for other research areas.
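A minimal numerical sketch of this pipeline, assuming toy dimensions and random weights (none of the values below are from the paper): an RNN folds the word vectors of a variable-length sentence into a fixed-dimension sentence vector, and a softmax output layer maps that vector to sentiment probabilities.

```python
import numpy as np

# Hypothetical dimensions: 4-dim word embeddings, 3-dim hidden state, 2 classes.
rng = np.random.default_rng(0)
d_emb, d_hid, n_cls = 4, 3, 2
Wx = rng.normal(scale=0.1, size=(d_hid, d_emb))   # input-to-hidden weights
Wh = rng.normal(scale=0.1, size=(d_hid, d_hid))   # hidden-to-hidden weights
Wo = rng.normal(scale=0.1, size=(n_cls, d_hid))   # hidden-to-output weights

def sentence_vector(word_vectors):
    """Run the RNN over a variable-length sentence; the final
    hidden state is a fixed-dimension sentence vector."""
    h = np.zeros(d_hid)
    for x in word_vectors:
        h = np.tanh(Wx @ x + Wh @ h)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Sentences of different lengths map to vectors of the same dimension.
s1 = sentence_vector(rng.normal(size=(5, d_emb)))   # 5-word sentence
s2 = sentence_vector(rng.normal(size=(9, d_emb)))   # 9-word sentence
p = softmax(Wo @ s1)                                # sentiment probabilities
```

In training, the weights would of course be learned by backpropagation through time rather than drawn at random; the sketch only shows how variable-length input yields a fixed-dimension representation.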
Abstract: By constructing three types of related-key differential characteristics, we present three corresponding related-key differential attacks on the DDP-64 cipher. Owing to the independence of the characteristics, we can recover 64 bits of the cipher's master key with 2^58.6 chosen plaintexts, 2^58.8 full-round DDP-64 encryptions, and 2^12.8 bits of storage. To break the cipher, we then only need an exhaustive search over the remaining 64 bits of the master key.
Abstract: Answer extraction (AE) is one of the key technologies in developing open-domain Question & answer (Q&A) systems. Its task is to assign the highest score to the expected answer based on an effective answer scoring strategy. We introduce an answer extraction method using a Merging score strategy (MSS) based on hot terms, which are defined according to their lexical and syntactic features to highlight the role of the question terms. To cope with the syntactic diversity of the corpus, we propose four improved candidate answer scoring algorithms, each based on the lexical function of hot terms and their syntactic relationships with the candidate answers. Two independent corpus scoring algorithms are proposed to exploit the role of the corpus in ranking the candidate answers. The six algorithms are combined in MSS to exploit the complementary action among the corpus, the candidate answers and the questions. Experiments demonstrate the effectiveness of the proposed strategy.
Abstract: In binary Region incrementing visual cryptography schemes (RIVCSs), the secrets of multiple secrecy regions can be gradually revealed by the human visual system. A characteristic distinguishing existing binary RIVCSs from traditional binary Visual cryptography schemes (VCSs) is that the contrasts of the different revealed regions differ, whereas traditional binary VCSs have the same contrast. To keep the quality (contrast) of the recovered image compatible with traditional VCSs, we use integer linear programming to design a binary (k,n)-RIVCS with the same contrast for all secrecy regions. Experimental results demonstrate that our method is feasible and effective; the trade-off is that our scheme involves a larger pixel expansion.
Abstract: Bird strikes present a huge risk for air vehicles, especially since traditional airport bird surveillance depends mainly on inefficient human observation. To improve the effectiveness and efficiency of bird monitoring, computer vision techniques have been proposed to detect birds, determine bird flight trajectories, and predict aircraft takeoff delays. A flying bird undergoing large deformation poses a great challenge to current tracking algorithms. We propose a segmentation-based approach that enables the tracker to adapt to the varying shape of the bird. The approach works by segmenting the object within a region of interest, which is determined by the object localization method and heuristic edge information. The segmentation is performed by a Markov random field trained with foreground and background Gaussian mixture models. Experiments demonstrate that the proposed approach handles large deformations and outperforms state-of-the-art trackers on the infrared flying bird tracking problem.
Abstract: In order to provide a secure, reliable and flexible way to hide information, a new attribute-based signcryption scheme based on ciphertext-policy and its security proof are presented. This scheme not only fulfils both authentication and confidentiality simultaneously in an efficient way, but also implements hierarchical decryption within one group and between different groups according to the user's authority (different users satisfying the same access structure can be considered a group). We provide a solution to information hiding using the proposed scheme, which can embed ciphertext into a carrier. Because of the hierarchical decryption property, different users obtain different messages from the same carrier. An illegal user cannot get any information without the private key, because the message embedded in the carrier is ciphertext. This solution can be applied to sharing important messages over public networks.
Abstract: As video copyright protection becomes more and more important, it is necessary to provide efficient H.264 compressed-domain watermarking for video Digital rights management (DRM). A new watermarking method based on the H.264 compressed domain is proposed for video DRM, in which the embedding and extracting procedures are performed using the syntactic elements of the compressed bit stream, so that complete decoding is unnecessary in both processes. Based on temporal and spatial analysis, appropriate sub-blocks are selected for embedding watermarks, increasing watermark robustness while reducing the degradation of visual quality. In order to avoid a bit-rate increase and strengthen the security of the scheme, only a set of quantized nonzero coefficients in different parts of the macroblocks is chosen for inserting the watermark. Experimental results show that the proposed scheme achieves excellent robustness against common attacks and is secure and efficient for video content DRM protection.
Abstract: Recent years have witnessed a rapid growth in using Web services for data publishing and sharing among organizations. To improve the efficiency of software development and economize on human and material resources, service reuse is viewed as a powerful means of reusing not only atomic services but also arbitrary granularities of Service process fragments (SPFs). However, effectively reusing arbitrary granularities of SPFs has not been solved yet, let alone taking the diverse QoS preferences of service providers and users into account. In this paper, we propose a novel method of SPF reuse, named SCKY, based on the Cocke-Kasami-Younger (CKY) algorithm. We first present an extended CKY to perform SPF queries. Then we address how to perform SPF queries with a probabilistic CKY, i.e., return the SPF with maximum emergence probability. Finally, we explore SPF queries with a consensus of QoS preferences between service providers and users. Through a set of experiments, the effectiveness and robustness of our approach are evaluated.
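As an illustration of the probabilistic-CKY step, the sketch below runs max-probability CKY over a toy Chomsky-normal-form "grammar" of service fragments; the rules, probabilities and service names are hypothetical stand-ins, not the SCKY grammar itself. Cell best[i][j][X] holds the highest probability that non-terminal X derives the fragment sequence words[i:j].

```python
# Hypothetical CNF rules over service "fragments": a head non-terminal
# rewrites to two sub-fragments (binary rules) or to a terminal service
# (lexical rules), each with a probability.
binary_rules = {            # (head, (left, right)) : probability
    ("S", ("A", "B")): 0.9,
    ("A", ("A", "A")): 0.1,
}
lexical_rules = {           # (head, terminal) : probability
    ("A", "pay"): 0.5,
    ("A", "ship"): 0.4,
    ("B", "notify"): 0.8,
}

def pcky(words):
    """Max-probability CKY: best[i][j][X] is the highest probability
    that X derives words[i:j]."""
    n = len(words)
    best = [[{} for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):                 # fill length-1 spans
        for (head, term), p in lexical_rules.items():
            if term == w and p > best[i][i + 1].get(head, 0.0):
                best[i][i + 1][head] = p
    for span in range(2, n + 1):                  # longer spans bottom-up
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):             # split point
                for (head, (l, r)), p in binary_rules.items():
                    cand = p * best[i][k].get(l, 0.0) * best[k][j].get(r, 0.0)
                    if cand > best[i][j].get(head, 0.0):
                        best[i][j][head] = cand
    return best

table = pcky(["pay", "ship", "notify"])
# table[0][3]["S"] is the maximum probability that "S" derives the
# whole three-service sequence.
```

Recovering the actual maximum-probability SPF rather than just its probability would additionally require storing back-pointers at each cell, which is a standard extension of this table.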
Abstract: Many developer recommendation techniques have been developed in the literature, most of them based on mining the historical commit repository. The idea behind them is that developers who have submitted historical commits similar to the incoming issue are more likely to be candidates for resolving the current issue. But is this idea always useful for developer recommendation? This paper examines this question by conducting a set of empirical studies on four real open-source projects. The results show that: 1) historical commit messages reflect developers' historical maintenance experience well and can be used for developer recommendation most of the time; 2) the number of historical commits submitted by the recommended developer(s) and the similarity value used to select the relevant historical commits should be carefully considered when recommending developers for issue resolution; 3) the efficiency of the issue resolution process can be improved if the source code files relevant to the issue are also recommended; and 4) developer recommendation techniques that rank the recommended developers by the number of co-changed source code files cannot always produce correct recommendations.
Abstract: Community question answering (CQA) provides an increasingly popular service where users ask and answer questions and access historical question-answer pairs. As a fundamental task in CQA, question similarity measurement computes the similarity between a queried question and the historical questions that have been solved by other users. We mine and use the most important semantic features as the semantic representation of questions, and incorporate the couplings of semantic features into the vector space model. We propose the Coupled question similarity (CQS) model and compute the similarity in a matrix factorization framework. Experiments conducted on real CQA data sets demonstrate that, with the incorporation of such couplings, sentence similarity performance is significantly improved compared with a variety of baseline methods.
Abstract: The optimal design of GaN-based Light-emitting diodes (LEDs) is important for their reliability. In this work, a new three-Dimensional (3D) circuit model with a resistor network is developed to study the current distribution in the active layer of vertically conducting GaN-based LEDs grown on Si(111) substrates with different structures and electrode patterns. The model consists of the resistance of the Transparent conductive layer (TCL), the resistance of the epitaxial layer, intrinsic diodes representing the active layer, and a junction representing the assumed AlN/Si multilayer. Simulation results for the current distribution in the active layers of two kinds of LED structures show that current distribution uniformity is greatly affected by the electrode pattern and the LED structure. Furthermore, the experimentally measured light emission uniformity agrees well with the simulation results. The electrical and optical characteristics of the LED are obviously affected by the current distribution uniformity.
Abstract: For an odd prime p congruent to 3 modulo 4 and an odd integer k, we investigate the upper bound on the magnitude of the cross-correlation values of a p-ary m-sequence s(t) and its decimated sequences s(dt+l) for a decimation value d. Using this upper bound on the magnitude of the cross-correlation values of a p-ary m-sequence and its decimated sequences, we construct a new class of p-ary sequence families with low correlation from m-sequences.
Abstract: In this paper, an Entropy-constrained dictionary learning algorithm (ECDLA) is introduced for efficient compression of Synthetic aperture radar (SAR) complex images. ECDLA_RI encodes the Real and imaginary parts of the images using ECDLA and sparse representation, while ECDLA_AP encodes the Amplitude and phase parts. Compared with the compression method based on the traditional Dictionary learning algorithm (DLA), ECDLA_RI improves the Signal-to-noise ratio (SNR) by up to 0.66 dB and reduces the Mean phase error (MPE) by up to 0.0735 relative to DLA_RI. With the same MPE, ECDLA_AP outperforms DLA_AP by up to 0.87 dB in SNR. Furthermore, the proposed method is also suitable for real-time applications.
Abstract: One of the main difficulties in Acoustic echo cancellation (AEC) is that the filter adaptation needs to vary according to different situations such as near-end interferences and echo path changes. In this paper, we propose a robust step-size control algorithm in the frequency domain. The proposed method is based on the optimization of the square of the bin-wise a posteriori error. A constraint on the filter update is applied, which contributes to the algorithm's robustness to near-end interferences. The learning rate formula is derived first, and then the relationship between the proposed algorithm and a robust-statistics-based approach is revealed. The method is extended to the Multidelay block frequency domain adaptive filter (MDF) to meet the low-delay demands of practical applications. Moreover, the values of the constraints are updated proportionately to improve the convergence behavior. Simulation results demonstrate the superiority of the proposed algorithm.
Abstract: Heavy noise is present in images captured in poor environments. The randomness of the noise makes the pixel distribution singular, which weakens the 1-D piecewise-smooth property of the original scene; thus, wavelet-based compression methods no longer work well. In this paper, a layer-segmentation-based compression scheme is proposed for gray images. Image textures and some high-frequency noise are described in a high-frequency layer, while the coarse part of the image is described in a low-frequency layer. The high-frequency layer is represented by a joint dictionary, and the low-frequency layer is coded with traditional wavelets. The proposed scheme is tested on natural and synthetic images. The results show that it achieves better rate-distortion performance than several competing compression systems. Moreover, the proposed scheme avoids further degradation of edges.
Abstract: This paper presents an efficient visual tracking framework that is robust to rotation, scale variation and occlusion. The target template is characterized by Local binary patterns (LBP), which are invariant to rotation. The LBP features are then integrated into the Normalized moment of inertia (NMI) to decide whether the template requires updating. This procedure enables an adaptive template matching strategy that addresses the tracking failures arising from scale variations. Kalman filtering is exploited to predict the trajectory of the target when it is occluded. Matching efficiency is achieved by applying a local pyramid search scheme. Experimental results validate the efficiency and effectiveness of our tracking framework.
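For illustration, the simplest 8-neighbor LBP code can be computed as below; this is the basic, non-rotation-invariant variant, whereas a rotation-invariant LBP (as the abstract implies) would additionally take the minimum over circular bit-rotations of each code. The example image values are arbitrary.

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 8-neighbor Local binary pattern: each interior pixel is
    replaced by a byte whose bits record whether each neighbor is
    greater than or equal to the center pixel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    # neighbors in a fixed circular order; bit i corresponds to offsets[i]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbor >= center).astype(np.uint8) << bit
    return out

img = np.array([[10, 20, 30],
                [40, 50, 60],
                [70, 80, 90]], dtype=np.int32)
codes = lbp_8neighbor(img)   # single interior pixel at (1, 1)
```

Because the code depends only on the sign of intensity differences, it is unchanged by monotonic illumination changes, which is part of what makes LBP attractive as a template descriptor.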
Abstract: To overcome the low accuracy and high false positive rate of existing computer-aided lung nodule detection, we propose a novel lung nodule detection scheme based on the Gestalt theory of visual cognition. The proposed scheme involves two parts that simulate cognitive features of the human eye such as simplicity, integrity and classification. First, the lung region is segmented from lung Computed tomography (CT) sequences. Then, local three-dimensional information is integrated into Maximum intensity projection (MIP) images from the axial, coronal and sagittal profiles. In this way, lung nodules and vessels are enhanced and discriminated based on the pathologic image characteristics of lung nodules. The experimental database includes fifty-three high-resolution CT images containing lung nodules confirmed by biopsy. The experimental results show that the accuracy of the proposed algorithm reaches 91.29%. The proposed framework improves both the performance and the computation speed of computer-aided nodule detection.
Abstract: The Maximum correntropy criterion (MCC) provides a robust optimality criterion for non-Gaussian signal processing. In this paper, the weight update equation of the conventional MCC-based adaptive filtering algorithm is modified by reusing the past K input vectors, forming a class of data-reusing MCC-based algorithms, called the DR-MCC algorithm. Compared with the conventional MCC-based algorithm, the DR-MCC algorithm provides much better convergence performance when the input data are correlated. The mean-square stability bound of the DR-MCC algorithm is studied theoretically. For both the Gaussian and non-Gaussian noise cases, expressions for the steady-state Excess mean square error (EMSE) of the DR-MCC algorithm are derived. The relationship between the data-reusing order and the steady-state EMSE is also analyzed. Simulation results agree with the theoretical analysis.
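A toy system-identification sketch of a data-reusing correntropy-based update, under assumed parameters (the step size, kernel width σ and reuse order K below are illustrative choices, not the paper's): each iteration reapplies the Gaussian-weighted update exp(-e²/2σ²)·e·x over the past K input vectors, so large impulsive errors are automatically de-emphasized.

```python
import numpy as np

rng = np.random.default_rng(1)
L, N, K = 4, 2000, 3          # filter length, samples, reuse order
mu, sigma = 0.05, 1.0         # step size and correntropy kernel width

w_true = rng.normal(size=L)                 # unknown system to identify
x = rng.normal(size=N)                      # input signal
d = np.convolve(x, w_true)[:N]              # desired (system output) signal
d += 0.01 * rng.standard_t(df=3, size=N)    # impulsive (non-Gaussian) noise

w = np.zeros(L)
for n in range(L - 1, N):
    # reuse the past K input vectors at each time step
    for j in range(min(K, n - L + 2)):
        u = x[n - j - L + 1:n - j + 1][::-1]   # input vector at time n-j
        e = d[n - j] - w @ u                   # a priori error
        # correntropy-induced weight: outlier errors get tiny updates
        w = w + mu * np.exp(-e**2 / (2 * sigma**2)) * e * u

mse = np.mean((w - w_true) ** 2)   # weight-error power after adaptation
```

With σ → ∞ the exponential weight tends to 1 and the inner update degenerates to the ordinary LMS update, which is why MCC-based filters are often described as a robustified LMS.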
Abstract: In wireless mobile networks, group members join and leave the group frequently, so a dynamic group key agreement protocol is required to provide a group of users with a shared secret key for cryptographic purposes. Most previous group key agreement protocols for wireless mobile networks are static and employ traditional PKI. This paper presents an ID-based dynamic authenticated group key agreement protocol for wireless mobile networks. In the Setup and Join algorithms, the protocol requires two rounds and each low-power node transmits messages of constant size. Furthermore, the Leave algorithm requires only one round and no low-power node needs to transmit any message, which improves the efficiency of the entire protocol. The protocol's AKE-security with forward secrecy is proved under the Decisional bilinear inverse Diffie-Hellman (DBIDH) assumption. It is additionally proved to be contributory.
Abstract: Repeated memory copying during protocol translation limits the capacity of a streaming media gateway. Unlike existing optimization techniques that rely on platform-specific features, this paper investigates algorithm-level, platform-independent strategies. A mathematical concept, the buf-string, is proposed to model the protocol transcoding process. Based on this model, three payload extraction algorithms that reduce memory copying are presented. The streaming gateway used in the Next-generation broadcasting (NGB) and Next-generation on-demand (NGOD) systems is taken as an example to demonstrate and evaluate our strategies. Experimental results from an x86 host and an embedded system show that our strategies can reduce CPU overhead by 15% to 45% and reduce the space complexity from linear to constant.
Abstract: In existing metro systems, the train-ground radio communication systems for different applications are deployed independently. Repeatedly investing in and constructing these communication infrastructures wastes substantial social resources and makes maintaining them difficult. We present the communication Quality of service (QoS) requirements for different train-ground radio applications. An integrated TD-LTE based train-ground radio communication system for the metro system (LTE-M) is then designed. To test the performance of the LTE-M system, an indoor testing environment is set up, in which a channel simulator and programmable attenuators are used to simulate the real metro environment. Extensive test results show that the designed LTE-M system satisfies metro communication requirements.
Abstract: Authentication error in two-hop wireless networks is considered without knowledge of the eavesdroppers' channels and locations. Wireless information-theoretic security has attracted considerable attention recently. A prerequisite of available works is a precise distinction between legitimate nodes and eavesdroppers; however, this is unrealistic in wireless environments, since error always exists in the node authentication process. To the best of our knowledge, no work has focused on this problem in information-theoretic security. This paper presents an eavesdropper model with authentication error and two eavesdropping modes. Then, the number of eavesdroppers that can be tolerated is analyzed while the desired secrecy is achieved with high probability in the limit of a large number of relay nodes. Finally, we draw two conclusions about authentication error: 1) choosing impersonating nodes as relays is the dominant factor in transmitted-message leakage, and the impersonation attack seriously decreases the number of eavesdroppers that can be tolerated; 2) erroneous authentication of legitimate nodes has almost no effect on the number of eavesdroppers that can be tolerated.
Abstract: Due to the use of cloud computing technology, ownership is separated from the administration of data in the cloud, and shared data might be migrated between different clouds, which brings new challenges to secure data creation, especially data privacy protection. We propose a User-centric data secure creation scheme (UCDSC) for the security requirements of resource owners in the cloud. In this scheme, a data owner first divides the users into different domains, then encrypts the data and defines different secure managing policies for the data according to the domains. To encrypt the data in UCDSC, we present an algorithm based on Access control conditions proxy re-encryption (ACC-PRE), which is proved to be master-secret secure and Chosen-ciphertext attack (CCA) secure in the random oracle model. We give the application protocols and compare UCDSC with existing approaches.
Abstract: With the booming of Human-centric multimedia networking (HMN), a rising amount of human-generated multimedia needs to be distributed to consumers with higher speed and efficiency. Hybrid distribution combining Client/Server (C/S) and Peer-to-Peer (P2P) has been successfully deployed on the Internet and its practical benefits have been widely reported, but its theoretical performance for mass data delivery unfortunately remains unknown. This paper presents an analytical and experimental study of the performance of accelerating large-scale hybrid distribution over the Internet. In particular, the paper focuses on user behavior in HMN and establishes a user behavior model based on the Kermack-McKendrick model from epidemiology. Analytical expressions for the average delay in HMN are then derived for C/S, P2P and hybrid distribution, respectively. Our simulations show how to design and deploy a hybrid HMN distribution system that bridges the gap between system utilization and quality of service, providing direct guidance for practical system design.
Abstract: Highly energy-efficient wireless communication has become a hot topic due to the global low-carbon economy. The system's total transmit power can be reduced by resource allocation in cooperative spectrum-sharing networks, in which the secondary user relays the primary user's traffic and, in return, is admitted to the licensed spectrum for its own data transmission. An optimal power and time allocation is presented to minimize the overall energy consumption while guaranteeing the users' quality of service in cooperative spectrum sharing. We formulate the problem as a convex program and obtain the optimal power and time allocation in closed form. We also analyze the performance of two different transmission protocols for the single relay channel.
Abstract: Velocity measurement is a basic task of radar. The target velocity is usually estimated from the Doppler frequency shift. However, traditional Doppler methods are unsuitable for high-speed targets, since the serious range migration between adjacent echoes causes phase wrapping. Serious range migration also interferes with the coherent integration used to improve the accuracy of the velocity estimate. A velocity measurement method based on the Keystone transform with entropy minimization is studied to solve this problem. The method applies the Keystone transform to the echoes and calculates the ambiguity degree with the help of entropy minimization; the proposed algorithm estimates the ambiguity degree without error over a wider range of SNR than the traditional method. The ambiguous Doppler frequency is then obtained from the slow time. Theoretical analyses and simulations show that the method has very high precision.
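The Doppler-velocity relationship and the role of the ambiguity degree can be illustrated numerically; the carrier frequency, PRF and target speed below are arbitrary illustrative values, and the integer ambiguity degree m is simply assumed known here rather than estimated via the Keystone transform and entropy minimization as in the paper.

```python
c = 3e8                    # speed of light (m/s)
fc = 10e9                  # carrier frequency (Hz), illustrative
prf = 2000.0               # pulse repetition frequency (Hz), illustrative
wavelength = c / fc

v_true = 900.0             # m/s, a high-speed target
fd_true = 2 * v_true / wavelength      # true Doppler shift (Hz)

# The measured Doppler is wrapped into one PRF interval; m pulse-repetition
# frequencies are "lost" and must be recovered as the ambiguity degree.
m = round(fd_true / prf)               # ambiguity degree (assumed known)
fd_measured = fd_true - m * prf        # wrapped (ambiguous) Doppler

fd_unwrapped = fd_measured + m * prf   # resolved Doppler frequency
v_est = fd_unwrapped * wavelength / 2  # estimated velocity (m/s)
```

With fd_true = 60 kHz and a 2 kHz PRF, the measured Doppler alone is useless (m = 30 intervals are lost), which is why a wrong ambiguity degree translates directly into a large velocity error.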
Abstract: This paper develops a multistate degradation structure of solder joints that can be used under various vibration conditions, based on a nonhomogeneous continuous-time hidden semi-Markov process. The parameters of the structure were estimated using unsupervised learning methods to describe the stochastic relationship between the degradation process and the monitoring indicator. Random vibration tests on solder joints with different levels of power spectral density and fixed forms were conducted with real-time monitoring of electrical resistance to examine the suitability of the model. It was experimentally verified that the multistate degradation structure matches the experimental process reasonably and accurately. Based on this multistate degradation model, the online prognostics of solder joints were analyzed, and the results indicated that faults or failures can be detected in a timely manner, enabling appropriate maintenance actions to be scheduled to avoid catastrophic failures of electronics.
Abstract: A new method for Synthetic aperture radar (SAR) image denoising is proposed. The prior information of the speckle statistical model can be exploited to judge its distribution. The bases of the SAR image can be estimated by Independent component analysis (ICA) and divided into two subspaces (a noise subspace and a real-signal subspace) by a linear classifier. Then a parametric Bootstrap estimates the parameters of the speckle statistical model on the noise subspace, and a nonparametric Bootstrap estimates the distribution of the real image on the real-signal subspace. According to the estimates obtained by Bootstrap, a corresponding Maximum a posteriori probability (MAP) filter is selected for image denoising, using the noise model's parameters for adaptive filtering. Experiments show that images processed by the new method achieve better visual perception and objective evaluation results.