2015 Vol. 24, No. 1

An Asynchronous Adaptive Priority Round-Robin Arbiter Based on Four-Phase Dual-rail Protocol
YANG Yintang, WU Ruizhen, ZHANG Li, ZHOU Duan
2015, 24(1): 1-7.
An asynchronous Adaptive priority round-robin arbiter (APRA) based on the four-phase dual-rail protocol is proposed. Combining the advantages of synchronous and asynchronous circuits, it provides the required bandwidth allocation on the basis of arbitration fairness and works in synchronous SoCs. Simulations and verifications are performed under the Nonidling and nonpreemptive (NINP) model. The results show that, compared with the commonly used Fixed priority (FP), Round-robin (RR) and Lottery arbiters, the speed of the proposed arbiter is improved by 18%-50.4% while its dynamic and static power is reduced by 8.4%-46.2% and 81.8%-90.9% respectively. The proposed arbiter is better in output bandwidth allocation and also has advantages in speed and power. Furthermore, it is easy to reconfigure, highly practical, adds little system-integration complexity, and suits various extreme communication traffic patterns.
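As an illustrative aside, the adaptive-priority round-robin idea (weighted bandwidth shares enforced on top of a rotating fair pointer) can be sketched in software. This Python model is hypothetical and says nothing about the paper's four-phase dual-rail circuit implementation:

```python
class AdaptivePriorityRRArbiter:
    """Software sketch of a weighted round-robin arbiter (illustrative)."""

    def __init__(self, n_ports, weights=None):
        self.n = n_ports
        # weights model the per-port bandwidth shares the abstract mentions
        self.weights = list(weights or [1] * n_ports)
        self.credits = list(self.weights)
        self.pointer = 0  # round-robin pointer for fairness

    def grant(self, requests, _refilled=False):
        """Return the granted port index, or None if no port requests."""
        # Scan ports in round-robin order, granting only those with credit.
        for offset in range(self.n):
            p = (self.pointer + offset) % self.n
            if requests[p] and self.credits[p] > 0:
                self.credits[p] -= 1
                self.pointer = (p + 1) % self.n
                return p
        # All credits of requesting ports spent: start a new arbitration
        # round by refilling the credits, then retry once.
        if any(requests) and not _refilled:
            self.credits = list(self.weights)
            return self.grant(requests, _refilled=True)
        return None
```

With weights [2, 1] and both ports always requesting, the grant sequence settles into a 2:1 bandwidth share while still alternating fairly within each round.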
Load Balanced Coding Aware Multipath Routing for Wireless Mesh Networks
SHAO Xing, WANG Ruchuan, HUANG Haiping, SUN Lijuan
2015, 24(1): 8-12.
Most current network-coding-based routing algorithms for wireless mesh networks treat the growth of coding opportunities as the sole optimization goal. This usually causes flows to aggregate in areas with coding opportunities and degrades network performance. This paper proposes Load balanced coding aware multipath routing (LCMR) for wireless mesh networks. To facilitate the evaluation of the discovered paths and the tradeoff between coding opportunity and load balancing, a novel routing metric, the Load balanced coding aware routing metric (LCRM), is presented, which considers the load degree of nodes when detecting coding opportunities. LCMR can spread traffic over multiple paths to further balance load. Simulation results demonstrate that LCMR evenly spreads traffic over the network and increases network throughput under heavy load, at the expense of some coding opportunities.
SE-FCA: A Model of Software Evolution with Formal Concept Analysis
SUN Xiaobing, LI Bixin, LI Bin, CHEN Ying
2015, 24(1): 13-19.
Software naturally evolves to cope with changing system requirements. Software evolution includes a series of activities to analyze, assess, and validate changes. This paper proposes an integrated software evolution model, SE-FCA, to support four core software evolution activities: program comprehension, change impact analysis, regression testing, and fault localization. These four activities are integrated and supported by the formal concept analysis technique, which efficiently handles the relation between entities and entity properties and provides remarkable insight into the structure of the original relation. The activities are evaluated in a unified empirical environment, and the empirical study shows their effectiveness under the SE-FCA model.
Failure Detection and Correction for Appearance Based Facial Tracking
WANG Lei, LIANG Yixiong, CAI Wangyang, ZOU Beiji
2015, 24(1): 20-25.
Appearance-based facial tracking methods, such as active appearance models and Candide models, are widely used in intelligent user interfaces and facial expression recognition. This paper proposes a novel method to detect and correct failures in appearance-based facial tracking. A sparse coding strategy is applied to learn an efficient feature representation of the difference between the warped image and the face template. The features are extracted by directly projecting the difference image onto the space spanned by the sparse-coding dictionary. An iterative regression-based method is proposed to detect and correct failures according to these features. Experimental evaluation on an open dataset shows a global performance improvement of the tracking algorithm.
An Analysis and Proof on Self-Similarity Property of Flash P2P Internet Video Traffic
JI Yimu, YUAN Yongge, HAN Zhijie, WANG Hao, HAN Lei, SUN Yanfei, WANG Ruchuan
2015, 24(1): 26-32.
To study the bandwidth loss caused by the flash P2P technique, its influence on network load, and its maintenance cost, the characteristics of flash P2P Internet traffic are studied. Based on an analysis of the features of the Real time media flow protocol (RTMFP), flash P2P traffic can be identified within Internet traffic. By extracting and analyzing the traffic of the three largest online video content providers in China (Youku, iQiyi and Sohu Video) and computing its probability distribution, it is found that both the transmission time of flash P2P traffic and the transmission time interval obey heavy-tailed distributions, which accounts for the self-similarity of flash P2P traffic. To better explain this phenomenon, a long-range dependent model is established from the observed traffic, and with this mathematical model it is strictly proved that flash P2P traffic is self-similar.
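The self-similarity claim rests on estimating long-range dependence from traffic traces. A minimal rescaled-range (R/S) sketch of a Hurst-exponent estimate, assuming the trace is a plain Python list (illustrative only, not the paper's model):

```python
import math
import random

def hurst_rs(series, window_sizes=(8, 16, 32, 64)):
    """Estimate the Hurst exponent by rescaled-range (R/S) analysis.
    H near 0.5 indicates uncorrelated traffic; H > 0.5 suggests the
    long-range dependence associated with self-similar traffic."""
    log_n, log_rs = [], []
    for n in window_sizes:
        rs_values = []
        for start in range(0, len(series) - n + 1, n):
            w = series[start:start + n]
            mean = sum(w) / n
            # cumulative deviations from the window mean
            dev, cum = 0.0, []
            for x in w:
                dev += x - mean
                cum.append(dev)
            r = max(cum) - min(cum)                              # range
            s = math.sqrt(sum((x - mean) ** 2 for x in w) / n)   # std dev
            if s > 0:
                rs_values.append(r / s)
        if rs_values:
            log_n.append(math.log(n))
            log_rs.append(math.log(sum(rs_values) / len(rs_values)))
    # least-squares slope of log(R/S) vs. log(n) estimates H
    k = len(log_n)
    mx, my = sum(log_n) / k, sum(log_rs) / k
    num = sum((x - mx) * (y - my) for x, y in zip(log_n, log_rs))
    den = sum((x - mx) ** 2 for x in log_n)
    return num / den
```

For a white-noise series the estimate should land near 0.5; heavy-tailed on/off periods push it toward 1.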
Deployment of Sensors in WSN: An Efficient Approach Based on Dynamic Programming
LI Yongyan, GAO Wen, WU Chunming, WANG Yansong
2015, 24(1): 33-37.
Efficient sensor node deployment is extremely important in wireless sensor networks: using as few sensor nodes as possible while satisfying requirements such as coverage, tolerance of potential node failures, and resilience to adverse environmental influence has great practical value. We propose an efficient approach for deploying sensor nodes in wireless networks, termed EDSNDA, which takes both the sensor coverage and network connectivity requirements into consideration while minimizing the number of necessary sensor nodes. We first propose a new coverage model for sensor nodes. Based on it, we establish four dynamic programming models for four different practical situations, and then propose algorithms for solving the corresponding models. The validity of the method is justified by simulation studies comparing it with current representative methods. The results show that, in the same circumstances, our method uses fewer sensor nodes and achieves better coverage and network connectivity than the others.
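To illustrate the dynamic-programming flavor of such deployment models (the paper's four models are not reproduced here), a minimal sketch for one simplified situation: choosing the fewest sensors from candidate positions on a line so that every target point is covered.

```python
import bisect

def min_sensors(targets, candidates, r):
    """Minimum number of sensors, placed at candidate positions with
    sensing radius r, needed to cover every target point on a line;
    returns None if full coverage is impossible. Illustrative DP sketch."""
    targets = sorted(targets)
    m = len(targets)
    INF = float("inf")
    dp = [0] + [INF] * m          # dp[i]: cost to cover the first i targets
    for i in range(1, m + 1):
        t = targets[i - 1]
        for c in candidates:
            if abs(c - t) <= r:   # candidate c covers target i-1
                # first target index inside c's coverage interval [c-r, c+r];
                # everything between j and i-1 is covered by c as well
                j = bisect.bisect_left(targets, c - r)
                dp[i] = min(dp[i], dp[j] + 1)
    return dp[m] if dp[m] < INF else None
```

The recurrence works because sorted targets covered by one sensor form a contiguous run, so each sensor choice reduces the problem to a strictly shorter prefix.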
Integrating Evolutionary Testing with Reinforcement Learning for Automated Test Generation of Object-Oriented Software
HE Wei, ZHAO Ruilian, ZHU Qunxiong
2015, 24(1): 38-45.
Recent advances in evolutionary test generation greatly facilitate the testing of Object-oriented (OO) software. Existing test generation approaches are still limited when the Software under test (SUT) includes Inherited class hierarchies (ICH) and Non-public methods (NPM). This paper presents an approach to generating test cases for OO software by integrating evolutionary testing with reinforcement learning. For OO software with ICH and NPM, two kinds of isomorphous substitution actions are presented and a Q-value matrix is maintained to assist the evolutionary test generation. A prototype called EvoQ is developed based on this approach and applied to generate test cases for real Java programs. Empirical results show that EvoQ can efficiently generate test cases for SUT with ICH and NPM and achieves higher branch coverage than two state-of-the-art test generation approaches within the same time budget.
Test Data Generation for Multiple Paths Based on Local Evolution
YAO Xiangjuan, GONG Dunwei, WANG Wenliang
2015, 24(1): 46-51.
Generating test data by genetic algorithms is a promising research direction in software testing, in which path coverage is an important test method. The efficiency of test data generation for multi-path coverage still needs improvement. We propose a test data generation method for multi-path coverage based on a genetic algorithm with local evolution. A mathematical model is established for all target paths, and in the algorithm individuals evolve locally according to different objective functions, which improves the utilization efficiency of test data. The computation cost is further reduced by using fitness functions of different granularity in different phases of the algorithm.
Finding Deceptive Opinion Spam by Correcting the Mislabeled Instances
REN Yafeng, JI Donghong, YIN Lan, ZHANG Hongbin
2015, 24(1): 52-57.
Assessing the trustworthiness of reviews is a key task in natural language processing and computational linguistics. Previous work mainly relies on heuristic strategies or simple supervised learning methods, which limit performance on this task. This paper presents a new approach to finding deceptive opinion spam by correcting mislabeled instances. The dataset is partitioned into several subsets, a classifier set is constructed for each subset, and the best classifier is selected to evaluate the whole dataset. Error variables are defined to compute the probability that an instance has been mislabeled, and mislabeled instances are corrected under two threshold schemes, majority and non-objection. The results show significant improvements of our method over the existing baselines.
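The two threshold schemes can be illustrated with a minimal sketch of the final correction step, assuming binary labels and per-instance classifier votes that are already available (all names here are hypothetical, not the paper's implementation):

```python
def correct_mislabeled(labels, classifier_votes, scheme="majority"):
    """Flip binary labels (0/1) that the classifier ensemble disagrees with.
    scheme='majority': flip if more than half of the classifiers disagree;
    scheme='non-objection': flip only if every classifier disagrees."""
    corrected = []
    for label, votes in zip(labels, classifier_votes):
        disagree = sum(1 for v in votes if v != label)
        if scheme == "majority":
            flip = disagree > len(votes) / 2
        else:  # non-objection: unanimous disagreement required
            flip = disagree == len(votes)
        corrected.append(1 - label if flip else label)
    return corrected
```

Non-objection is the more conservative scheme: it only relabels an instance when no classifier objects to the change.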
The Browsing Pattern and Review Model of Online Consumers Based on Large Data Analysis
NING Lianju, WANG Haoyu, FENG Xin, DU Junping
2015, 24(1): 58-64.
In online shopping, commodity information and consumer reviews are the main factors affecting purchasing behavior. Starting from consumers' preferences in browsing commodity information and the inherent properties of online reviews, this paper statistically analyzes browsing data and the interval distribution of consumer reviews based on real data from 360buy, a large domestic B2C commerce website in China. The research finds that the distribution of commodity-information browsing time on the Internet is fragmented and can be described by a fat-tail effect. It also demonstrates that users' browsing patterns are related to the type of information and how it is displayed: the number of pictures and the length of titles affect the click rate. The interval distribution of reviews can be described by a power-law function, and the power exponent increases monotonically with customers' concern for the corresponding commodity: the higher the exponent, the higher the degree of consumer attention. These findings establish basic rules of the browsing pattern and review model, which is of important significance for future research.
A Matching Algorithm Based on Association Rules in Ontology Based Publish/Subscribe System
LIU Shufen, CHI Meng, YAO Zhilin
2015, 24(1): 65-70.
In this paper, we introduce association rules into the event matching process and propose a matching algorithm based on association rules for an ontology-based publish/subscribe system. The algorithm discovers association rules from subscriptions and then integrates these rules into arriving events. The integrated association rules guide the event matching process, which greatly improves efficiency by reducing unnecessary matching. To make association rules usable in the ontology-based publish/subscribe system, we also present an approach for transforming subscriptions into a format suitable for the data mining algorithm and an approach for integrating the association rules into events. Comparative results show that the algorithm improves matching efficiency.
Design of a Low-Power 20Gb/s 1:4 Demultiplexer in 0.18μm CMOS
2015, 24(1): 71-75.
A low-power multi-phase-clock 20Gb/s 1:4 Demultiplexer (DEMUX) without inductors is designed in a 0.18μm Complementary metal oxide semiconductor (CMOS) process. The 1:4 DEMUX includes two 1:2 DEMUX cells, one 1/2 frequency divider cell, and several data and clock buffers. A dynamic CMOS logic latch is used in the 1:2 DEMUX cell and a single-clock dynamic-loading latch is used in the 1/2 frequency divider cell. These two logical structures not only reduce power dissipation and area but also provide rail-to-rail output levels, which offer high noise margin and allow seamless connection without logic-level conversion in system integration. Test results show that with a 20Gb/s input pseudorandom sequence of length 2^31-1, the 1:4 DEMUX works well at a supply voltage of 2V. The output swing is 450mV with an external 50Ω load and the die size is 0.475×0.475mm^2. The chip power dissipation is 86mW when four pads are connected to a four-channel oscilloscope.
Selectivity Estimation for String Predicates Based on Modified Pruned Count-Suffix Tree
LI Dong, ZHANG Qixu, LIANG Xiaochong, GUAN Jida, XU Yang
2015, 24(1): 76-82.
The accuracy of predicate selectivity estimation is one of the important factors affecting query optimization performance. State-of-the-art selectivity estimation algorithms for string predicates based on the Pruned count-suffix tree (PST) often suffer from severe underestimation and overestimation problems, so their average relative errors are poor. We analyze the main causes of these problems and propose a novel Restricted pruned count-suffix tree (RPST) together with a new pruning strategy. Based on these, we present the EKVI and EMO algorithms, extended from the KVI and MO algorithms respectively. Experiments comparing EKVI and EMO with the traditional KVI and MO algorithms show that the average relative errors of our selectivity estimation algorithms are significantly better than those of the traditional algorithms, with EMO the best overall.
Statistical Interconnect Crosstalk Noise Model and Analysis for Process Variations
LI Jianwei, DONG Gang, WANG Zeng, YE Xiaochun
2015, 24(1): 83-87.
When the operating frequency exceeds several gigahertz, the effect of inductance plays an important role and should be included for accurate and fast crosstalk noise analysis. For a new generation of IC (Integrated circuit) design tools, crosstalk noise analysis should also consider the influence of process variations. In this paper, we propose a coupled RLC distributed-parameter crosstalk noise model with capacitive load termination, and develop a framework for using the model under process variations. Our results show that, compared with HSPICE, the critical data errors of the proposed model are within 1%, and the relative errors between values calculated under process variations and HSPICE Monte Carlo simulation values are less than 5%. The key features of the new model are: (1) the impact of inductance on crosstalk noise is considered; (2) the model can be used for process-variation analysis; (3) the model reflects the effect of load capacitance directly; (4) numerical inversion of the Laplace transform is introduced to improve speed. The proposed model thus meets the speed and accuracy needs of future IC design.
XY-Type GPU Cache: Exploiting Spatial Localities in both X and Y Directions to Avoid Conflict Miss
2015, 24(1): 88-95.
Caches have been introduced into many Graphics processing units (GPUs) to decrease the frequency of data transfer between high-performance computing units and low-speed, long-latency external memory. The traditional index mapping scheme, designed originally for CPU caches, exploits only the spatial locality in address space. Accesses to graphics data, however, exhibit region locality on the frame buffer: high spatial locality in both the X and Y directions. The traditional scheme may therefore generate more conflict misses on a few cache lines, which eventually results in a high cache miss ratio and a performance drop, so a traditional CPU cache cannot be used directly in a GPU. We propose a new conflict-avoiding GPU cache, called the XY-type cache, with a new index mapping scheme whose cache line indices are computed from both the X and Y coordinates of pixels, so that the cache index distribution is consistent with the region locality on the frame buffer. Our evaluation results show that the proposed XY-type GPU cache can reduce the cache miss ratio by up to 88% by scattering cache accesses evenly over all lines, and can completely avoid the adverse effect of frame resolution. Since its miss ratio in a direct-mapped or 2-way set-associative structure is close to, or even lower than, that of a fully-associative structure, the best case for avoiding line conflicts, the XY-type GPU cache can be designed with lower complexity and lower power consumption.
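The effect of a coordinate-based index can be demonstrated with a small sketch: a conventional linear index collapses an 8×8 pixel tile onto a single cache line, while a Morton-style interleaving of X and Y bits (one possible XY-type mapping, not necessarily the paper's exact scheme) spreads the tile over all lines. The pitch, line size and line count below are illustrative values.

```python
def morton_interleave(x, y, bits=4):
    """Interleave the low bits of X and Y pixel coordinates (Morton order)
    so that a 2-D tile maps to a well-spread set of cache line indices."""
    idx = 0
    for i in range(bits):
        idx |= ((x >> i) & 1) << (2 * i)
        idx |= ((y >> i) & 1) << (2 * i + 1)
    return idx

def linear_index(x, y, pitch=1024, line_words=8, n_lines=64):
    """Traditional CPU-style index: linear frame-buffer address mod #lines.
    With pitch a multiple of line_words * n_lines, whole columns collide."""
    return ((y * pitch + x) // line_words) % n_lines

def xy_index(x, y, n_lines=64):
    """XY-type index computed from both pixel coordinates."""
    return morton_interleave(x, y) % n_lines
```

For an 8×8 tile, every pixel row starts 1024 words apart, so all 64 pixels share one linear index, while the Morton mapping gives 64 distinct indices.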
Hyponymy Graph Model for Word Semantic Similarity Measurement
WANG Junhua, ZUO Wanli, PENG Tao
2015, 24(1): 96-101.
Measuring word semantic similarity is a generic problem with a broad range of applications such as ontology mapping, computational linguistics and artificial intelligence. Previous approaches to computing word semantic similarity did not consider concept occurrence frequency or a word's sense number. This paper introduces a hyponymy graph and, based on it, proposes a novel word semantic similarity model. For two words to be compared, we first retrieve their related concepts; then produce the lowest-common-ancestor matrix and the distance matrix between concepts; and finally calculate distance-based and information-based similarities, which are integrated into the final semantic similarity. The main contribution of our method is that both concept occurrence frequency and word sense number are taken into account. This similarity measure fits human ratings more closely and effectively simulates the human thinking process. Experimental results on the benchmark datasets M&C and R&G, with WordNet 2.1 as the platform, demonstrate roughly 0.9%-1.2% improvements over the best existing approaches.
Linear Canonical Transform Related Operators and Their Applications to Signal Analysis, Part I: Fundamentals
WANG Xiaobo, ZHANG Qiliang, ZHOU You, QIAN Jing, ZOU Hongxing
2015, 24(1): 102-109.
In recent years, the Linear canonical transform (LCT) has been recognized as a powerful tool in signal processing and optics. This two-part paper addresses unitary and Hermitian operators and their duality concept associated with the LCT. In Part I, based on the proposed operators, three LCT-related topics are derived. First, convolution and correlation operations in the LCT domain are defined. Second, a new transform, the Linear canonical Mellin transform (LCMT), is introduced, and convolution and correlation operations in the LCMT domain are given. Third, joint canonical distributions satisfying the canonical transform marginals are given. Part II concerns two applications of the theory derived in Part I to signal analysis.
A New DS-SS Signal Detection Trial Algorithm for False Alarm Rejection Based on Motion Parameters Constraint
CUI Wei, WANG Fengyun, LI Zhenzhen, JIN Qianyu, ZHANG Yunhan
2015, 24(1): 110-114.
To reject false alarms, a new detection trial algorithm based on a motion parameters constraint is proposed for Direct sequence spread spectrum (DS-SS) signals. A mathematical model based on uniform motion between two adjacent detections is established to analyze the overall performance of the proposed trial algorithm, and the motion parameters constraint is applied to the conventional variable-dwell-time Tong trial algorithm. Expressions for the false alarm probability and the detection probability are derived. Theoretical analysis and simulation results demonstrate that, in comparison with the conventional M-of-N trial algorithm, the Tong trial algorithm, and the Tong trial algorithm based on a near-neighbor constraint, the proposed trial algorithm can significantly reduce the false alarm probability and enhance detection ability without sacrificing search speed or increasing complexity.
An Adaptive SVD Method for Solving the Pass-Region Problem in S-Transform Time-Frequency Filters
YIN Baiqiang, HE Yigang, LI Bing, ZUO Lei, YUAN Lifen
2015, 24(1): 115-123.
The S-transform (ST) is an excellent tool for time-frequency filtering. Two factors influence filtering performance: the Inverse S-transform (IST) algorithm and the pass-regions in the time-frequency domain. A novel matrix IST algorithm is derived, and an adaptive Singular value decomposition (SVD) method is proposed for solving the pass-region problem. The former avoids reconstruction errors in time-frequency filtering; the latter effectively distinguishes the pass-region of the signal from noise. Filtering is realized by removing the smaller singular values and keeping the larger ones. An additive noise perturbation model is built in the ST time-frequency domain and the effective rank of the noise perturbation model based on the matrix IST is analyzed. Simulation results indicate that the proposed SVD method provides higher precision than existing methods at low signal-to-noise ratios and does not need to compute the noise statistics. Illustrative examples verify the effectiveness of the proposed method.
Distributional Escape Time Algorithm Based on Generalized Fractal Sets in Cloud Environment
LIU Miao, LIU Shuai, FU Weina, ZHOU Jiantao
2015, 24(1): 124-127.
Although fractals are widely used across the sciences, the escape time algorithm, the most effective algorithm for drawing fractal figures, performs poorly when the generation function is complex. In this paper, we move the classic escape time algorithm into a cloud environment to improve its performance. First, we provide a separation method for the escape time algorithm in a cloud environment. Then we calculate the complexity of the new algorithm with a probability model based on the allocation policy. Finally, we use generalized fractal sets as experimental subjects to validate our conclusions. Experimental results show the correctness and speed of the new algorithm.
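The classic escape time iteration, and the per-row separation that makes it naturally distributable across cloud workers, can be sketched as follows (illustrative; the paper's allocation policy and generalized fractal sets are not reproduced):

```python
def escape_time(c, max_iter=100, radius=2.0):
    """Classic escape-time iteration for the Mandelbrot set: iterate
    z <- z^2 + c and return the step at which |z| first exceeds the
    escape radius; max_iter means the point is assumed inside the set."""
    z = 0j
    for n in range(max_iter):
        if abs(z) > radius:
            return n
        z = z * z + c
    return max_iter

def render_rows(row_range, width=40, height=40):
    """One worker's share of the image: a horizontal band of rows.
    Partitioning rows across nodes mirrors the separation idea, since
    each pixel's escape time is independent of every other pixel's."""
    xs = [-2.0 + 3.0 * i / (width - 1) for i in range(width)]
    ys = [-1.5 + 3.0 * j / (height - 1) for j in range(height)]
    return [[escape_time(complex(xs[i], ys[j])) for i in range(width)]
            for j in row_range]
```

Because pixels are independent, a coordinator can hand each worker a `row_range` and simply concatenate the returned bands.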
An Incremental Algorithm to Feature Selection in Decision Systems with the Variation of Feature Set
QIAN Wenbin, SHU Wenhao, YANG Bingru, ZHANG Changsheng
2015, 24(1): 128-133.
Feature selection is a challenging problem in pattern recognition and machine learning. In real-life applications, the feature set of a decision system may vary over time, yet there are few studies on feature selection under such variation. This paper focuses on this issue: an incremental feature selection algorithm for dynamic decision systems is developed based on the dependency function. The incremental algorithm avoids recomputation, rather than treating the changed decision system as a new one and computing the feature subset from scratch. We first update the dependency function in an incremental manner, and then incorporate the updated dependency function into the incremental feature selection algorithm. Compared with the direct (non-incremental) algorithm, the computational efficiency of the proposed algorithm is improved. Experimental results on several UCI data sets show that the proposed algorithm is effective and efficient.
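The dependency function at the core of such algorithms is the standard rough-set degree of dependency. A minimal sketch of its non-incremental computation (the incremental update is the paper's contribution and is not reproduced here):

```python
from collections import defaultdict

def dependency(universe, cond_attrs, dec_attr):
    """Rough-set degree of dependency gamma(C, D): the fraction of
    objects whose equivalence class under the condition attributes C
    is consistent with respect to the decision attribute D."""
    blocks = defaultdict(list)
    for obj in universe:
        # group objects by their values on the condition attributes
        blocks[tuple(obj[a] for a in cond_attrs)].append(obj)
    # positive region: blocks whose members all share one decision value
    pos = sum(len(b) for b in blocks.values()
              if len({o[dec_attr] for o in b}) == 1)
    return pos / len(universe)
```

Adding a feature can only refine the blocks, so the dependency is monotonically non-decreasing in the feature set, which is what makes dependency-based feature selection (and its incremental maintenance) work.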
Filtering Chinese Image Spam Using Pseudo-OCR
XU Bin, LI Ruiguang, LIU Yashu, YAN Hanbing, LI Siyuan, ZHANG Honggang
2015, 24(1): 134-139.
For image spam filtering, Optical character recognition (OCR) based methods often achieve better performance because they recognize the corresponding text. However, traditional OCR techniques bring shortcomings such as expensive computational cost and vulnerability to image noise and artificial interference, especially for Chinese image spam filtering. By optimizing the recognition procedure of traditional OCR, we propose a pseudo-OCR approach better suited to Chinese image spam filtering, in which it is sufficient to discriminate potential spam character features from ham ones instead of recognizing them. Moreover, a novel Chinese key-point-based character feature specific to pseudo-OCR is devised and extracted with a carefully designed algorithm, which outperforms classic corner detection methods in finding such key-points. Experimental results show that our proposed system usually performs better than traditional OCR-based methods while maintaining a low false positive rate.
A Sequential Bayesian Algorithm for DOA Tracking in Time-Varying Environments
GAO Xunzhang, LI Xiang, Jason Filos, DAI Wei
2015, 24(1): 140-145.
This paper focuses on the Direction of arrival (DOA) tracking problem in dynamic environments where each source signal is modeled as a Gaussian process with time-varying mean and unknown covariance. In highly dynamic environments, benchmark algorithms usually suffer deteriorated performance. By treating the source signals as a function of the arrival angles, a sequential Bayesian tracking approach named Simultaneous angle-source update (SASU) is proposed based on the Maximum a posteriori (MAP) principle. The key feature of the proposed approach is to simultaneously update the arrival angles and the source signals in the Kalman filter update step by converting the update of the state vector into a joint optimization problem, and an iterative Newton method is proposed to solve this problem efficiently. The accuracy and robustness of the proposed SASU algorithm are demonstrated via simulations.
Construction of Type-II QC LDPC Codes Based on Perfect Cyclic Difference Set
ZHANG Lijun, LI Bing, CHENG Leelung
2015, 24(1): 146-151.
Quasi-cyclic (QC) Low-density parity-check (LDPC) codes are constructed from a combination of weight-0 (null) and Weight-2 (W2) Circulant matrices (CM), which can be seen as a special case of general type-II QC LDPC codes. The shift matrix of the codes is built from an integer sequence called a perfect Cyclic difference set (CDS), which guarantees that the girth of the code is at least six. Simulation results show that the codes perform well in comparison with a variety of other LDPC codes, with excellent error-floor and decoding-convergence characteristics.
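The defining property of a perfect cyclic difference set, that every nonzero residue arises exactly once as a difference of two distinct members, can be checked directly. As an illustrative sketch (not the paper's construction), {1, 2, 4} is a (7, 3, 1) perfect CDS:

```python
from collections import Counter

def is_perfect_cds(d, v):
    """Check whether set d is a perfect (v, k, 1) cyclic difference set:
    every nonzero residue mod v must occur exactly once among the
    differences a - b of distinct members a, b of d."""
    diffs = Counter((a - b) % v for a in d for b in d if a != b)
    return all(diffs[r] == 1 for r in range(1, v))
```

It is this "each difference exactly once" property that rules out the short cycles in the Tanner graph and yields girth at least six.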
A Doubly Parameterized Detector for Mismatched Signals
LIU Weijian, XIE Wenchong, ZHANG Qianping, LI Rongfeng, DUAN Keqing
2015, 24(1): 152-156.
In this paper, we consider the problem of adaptive multichannel signal detection in the presence of signal mismatch, and introduce a novel tunable detector parameterized by two tunable parameters. It has the Constant false alarm rate (CFAR) property and covers Kelly's generalized likelihood ratio test (KGLRT), the Adaptive matched filter (AMF), and the Adaptive coherence estimator (ACE) as three special cases. The novel detector controls its response to mismatched signals by adjusting the two tunable parameters. Remarkably, compared with its natural competitors, it achieves improved detection performance for matched signals, enhanced rejection of severely mismatched signals, and better robustness to slightly mismatched signals.
An Approach of Steganography in G.729 Bitstream Based on Matrix Coding and Interleaving
WU Zhijun, CAO Haijuan, LI Douzhe
2015, 24(1): 157-165.
This paper proposes an approach to secure communication over the Internet based on speech information hiding. In this approach, an algorithm for embedding 2.4Kbps low-bit-rate Mixed-excitation linear prediction (MELP) speech into G.729-coded speech is presented, adopting matrix coding (a covering-code technique) and interleaving. The parameters of the G.729 source codec are analyzed for their Capability of noise tolerance (CNT), and those with less impact on the quality of the reconstructed speech are selected to carry the secret speech data. Experimental results show that the proposed steganography algorithm not only attains a high data embedding rate of up to 2.4Kbps but also achieves good imperceptibility, indicating that the algorithm is suitable for high-capacity data hiding.
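Matrix coding embeds k secret bits into 2^k - 1 cover bits while changing at most one of them, which is what keeps the impact on the carrier speech small. A minimal Hamming-syndrome sketch of the idea (illustrative; the paper's G.729 parameter selection and interleaving are not reproduced):

```python
def matrix_embed(cover_bits, message_bits):
    """F5-style matrix coding: embed k message bits into n = 2^k - 1
    cover bits by flipping at most one cover bit."""
    k = len(message_bits)
    n = (1 << k) - 1
    assert len(cover_bits) == n
    # syndrome: XOR of the 1-based positions holding a 1
    syndrome = 0
    for i, bit in enumerate(cover_bits, start=1):
        if bit:
            syndrome ^= i
    target = int("".join(map(str, message_bits)), 2)
    flip = syndrome ^ target            # position to flip (0 means none)
    stego = list(cover_bits)
    if flip:
        stego[flip - 1] ^= 1
    return stego

def matrix_extract(stego_bits, k):
    """Recover the k message bits as the syndrome of the stego bits."""
    syndrome = 0
    for i, bit in enumerate(stego_bits, start=1):
        if bit:
            syndrome ^= i
    return [int(b) for b in format(syndrome, f"0{k}b")]
```

With k = 3, three secret bits ride on seven cover bits at the cost of at most a single bit flip, so the embedding rate grows while the distortion stays bounded.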
Some Properties of Correlation Function on Generalized Boolean Functions
ZHUO Zepeng, CHONG Jinfeng, WEI Shimin
2015, 24(1): 166-169.
The relationship among the crosscorrelation functions of four arbitrary generalized Boolean functions is presented. Based on it, some properties of the crosscorrelation and autocorrelation functions are given, and the relationship between the crosscorrelation function and the generalized Walsh-Hadamard transform is characterized. In the process, we generalize old results and obtain new characterizations of cryptographic properties.
A Cluster-Based Opportunistic Multicast in Multi-hop Wireless Networks
ZHANG Haiyang
2015, 24(1): 170-175.
Multicast has become increasingly important in multi-hop wireless networks for applications that deliver shared media and data to multiple receivers. However, existing multicast protocols for wireless networks have large control overhead and cannot adequately exploit the broadcast communication mode in the multicast structure to share forwarding paths. This paper proposes a Cluster-based opportunistic multicast (COM) algorithm, which constructs a cluster-based multicast tree using a stability greedy algorithm to minimize the total Cluster-based expected transmission count (CETX). In this multicast tree, edges are created between the kernel sets of pairs of clusters to improve forwarding efficiency and structural stability. During data distribution, the multi-layer solution combines multicast with opportunistic routing to improve transmission efficiency. Simulation results show that the COM scheme sends fewer packets in total and achieves higher stability than other topology-based solutions and opportunistic multicast.
A New DiffServ Edge Router with Controlled-UDP
XIAO Yang, QU Guangzhi, Kiseon Kim
2015, 24(1): 176-180.
Existing edge routers cannot assign link capacities between User datagram protocol (UDP) subscribers and Transmission control protocol (TCP) subscribers so as to guarantee multiple-priority traffic under Differentiated services (DiffServ). To solve this problem, a new DiffServ edge router with Controlled-UDP (C-UDP) is proposed, which can control the data rates of UDP and TCP subscribers according to their priorities. In the proposed edge router, multi-queue buffers are controlled by TCP Active queue management (AQM) and UDP AQM algorithms to implement fair and stable link capacities. The proposed TCP AQM and UDP AQM algorithms achieve network congestion control and DiffServ by operating the AQM parameters under the stability conditions we derive. Dynamic simulation results demonstrate the validity of the proposed edge router for DiffServ networks.
Research on Network Malicious Code Immune Based on Imbalanced Support Vector Machines
LI Peng, WANG Ruchuan
2015, 24(1): 181-186.
Abstract(451) PDF(804)
The malicious computer code immune system and the biological immune system are highly similar: both preserve the stability of the system in real time in a constantly changing environment. This similarity is exploited to design a malicious code immune system that addresses the active defense problem against malware. The immunization project comprises four major components: an immune information collection program, an immune information filtering program, an immunization information discrimination program, and an immune response program. An imbalanced support vector machine method is applied to optimize the outputs of the malicious code immunization, removing uncertain immune outputs. We demonstrate in detail the feasibility of the imbalanced support vector machine method for optimizing the immunization program's output data, and show that it can optimize the outputs of the malicious code immune system by removing glitches from them. As a result, the method helps to determine the precise time at which the immune response emerges.
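One standard way to realize an imbalanced SVM is to penalize errors on the rare (malicious) class more heavily than on the abundant (benign) class. The sketch below computes "balanced" per-class weights in the convention popularized by scikit-learn's class_weight='balanced'; it is illustrative only and not the authors' exact formulation.

```python
from collections import Counter

def balanced_class_weights(labels):
    """'Balanced' class weighting: w_c = n_samples / (n_classes *
    n_samples_in_class), so the rare malicious class receives a
    proportionally larger misclassification penalty."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * m) for c, m in counts.items()}

# 95 benign (0) vs. 5 malicious (1) immune-output samples.
labels = [0] * 95 + [1] * 5
w = balanced_class_weights(labels)
print(w[1] / w[0])  # the minority class is weighted 19x heavier
```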
A Novel Approach to Automatic Security Protocol Analysis Based on Authentication Event Logic
XIAO Meihua, MA Chenglin, DENG Chunyan, ZHU Ke
2015, 24(1): 187-192.
Abstract(592) PDF(1170)
Since security protocols form the cornerstones of modern secure networked systems, it is important to develop an informative, accurate, and deployable approach for finding errors and proving that protocols meet their security requirements. We propose a novel approach to checking the security properties of cryptographic protocols using authentication event logic. Compared with the logic of algorithmic knowledge, authentication event logic guarantees that any well-typed protocol is robustly safe under attack while reasoning only about the actions of honest principals in the protocol. It places no bound on the size of the principals, requires no state-space enumeration, and is decidable. The types assigned to protocol data provide an intuitive explanation of how the protocol works. Our approach has led us to the independent rediscovery of flaws in existing protocols and to the design of improved protocols.
Dynamic Decode-and-Forward Relaying with Partial CSIT and Optimal Time Allocation
SU Yinjie, JIANG Lingge, HE Chen
2015, 24(1): 193-198.
Abstract(569) PDF(655)
A Dynamic decode-and-forward (DDF) relaying protocol with partial Channel state information at the transmitter (CSIT) is proposed for a three-node half-duplex single-antenna network consisting of a single source-destination pair and a relay. A Diversity-multiplexing tradeoff (DMT) analysis is presented, in which the DMT of the proposed protocol is derived in closed form and an adaptive time allocation strategy is developed to achieve the optimal performance. It is shown that time allocation with partial CSIT significantly improves the achievable DMT for DDF relaying. Moreover, unlike the existing DDF protocol with CSIT, which assumes a long-term power constraint, the proposed protocol generalizes to practical scenarios where a strict short-term power constraint is often imposed for environmental safety and interference prevention.
Improved Known-Key Distinguisher on Round-Reduced 3D Block Cipher
ZHA Daren, WU Shuang, WANG Qiongxiao
2015, 24(1): 199-204.
Abstract(609) PDF(810)
The 3D block cipher is a three-dimensional version of the AES (Advanced encryption standard), which uses a three-dimensional state and similar round functions. Using the known-key attack model proposed by Knudsen and Rijmen, we propose an improved distinguisher on 15 rounds of 3D, which has 22 rounds in total. The distinguisher is constructed using rebound techniques. Whereas the previous distinguisher merges only three inbound phases, we propose a method to merge four inbound phases using gradual matching techniques. The improved distinguisher requires 2^128 computations and 2^64 memory; the computational complexity is significantly reduced from 2^200 in the previous attack.
SAR Image Despeckling Using Scale Mixtures of Gaussians in the Nonsubsampled Contourlet Domain
CHANG Xia, JIAO Licheng, LIU Fang, SHA Yuheng
2015, 24(1): 205-211.
Abstract(399) PDF(822)
The edge and contour details in SAR images are important for subsequent processing tasks. The Nonsubsampled contourlet transform (NSCT), a multiscale geometric analysis method, captures the geometric information of SAR images effectively. By describing the aggregation behavior of neighboring coefficients, the Gaussian scale mixture model has exhibited favorable performance. A novel SAR image despeckling method is presented by constructing a Gaussian scale mixture model of the NSCT coefficients. The method models SAR images using the multiscale and multidirectional information in the NSCT domain, and the dependency among neighboring NSCT coefficients is also taken into account. Speckle noise coefficients are shrunk by statistical prior estimation based on the constructed SAR image model. Experimental results demonstrate that our method preserves directional information and suppresses speckle effectively.
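Gaussian scale mixture estimators generalize locally adaptive Wiener shrinkage by mixing over a hidden scale variable. As a simplified, hypothetical illustration of coefficient shrinkage (not the paper's full NSCT-domain model), the sketch below attenuates each 1-D coefficient by sigma_x^2 / (sigma_x^2 + sigma_n^2), with the signal variance estimated from a small neighborhood.

```python
def local_wiener_shrink(coeffs, noise_var, win=1):
    """Locally adaptive Wiener shrinkage of 1-D transform
    coefficients: estimate the signal variance from each
    coefficient's neighborhood, then attenuate the coefficient by
    sigma_x^2 / (sigma_x^2 + sigma_n^2). Coefficients whose local
    energy is below the noise floor are zeroed entirely."""
    out = []
    n = len(coeffs)
    for i, y in enumerate(coeffs):
        nbhd = coeffs[max(0, i - win):min(n, i + win + 1)]
        e2 = sum(c * c for c in nbhd) / len(nbhd)   # local second moment
        sig_x2 = max(e2 - noise_var, 0.0)            # signal variance estimate
        out.append(sig_x2 / (sig_x2 + noise_var) * y)
    return out

noisy = [0.1, 5.0, 4.8, 0.2, -0.1]  # one strong edge feature amid noise
print(local_wiener_shrink(noisy, noise_var=0.25))
```

Large coefficients (likely edges) pass nearly unchanged while small, noise-dominated coefficients are suppressed, which mirrors the despeckling behavior the abstract describes.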
The Accuracy Analysis of Zero-lag Correlation Coefficient of Dual-Polarization Radar
ZHANG Wenwen, YIN Fulian, JIA Yunfeng
2015, 24(1): 212-217.
Abstract(498) PDF(707)
Dual-polarization technology is gradually being deployed in China following the Doppler weather radar network. It extends the capabilities of conventional Doppler radar and can directly measure quantities related to the microphysical characteristics of precipitation, such as the differential reflectivity factor and the zero-lag correlation coefficient. Because the zero-lag correlation coefficient lies between 0 and 1 and takes similar values for different particle phase states, even a small error may lead to misjudgment when identifying the phase state. We therefore derive the error mean and root mean square of the zero-lag correlation coefficient in the alternating transmission mode and analyze the effect of each parameter on the precision. This work not only provides a useful polarimetric variable for better identifying hydrometeor phase states and recognizing clutter, but also offers a practical way to estimate the zero-lag correlation coefficient and related quantities.
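For context, the zero-lag co-polar correlation coefficient is commonly estimated from complex H- and V-channel voltage samples with the standard sample estimator sketched below; the paper's contribution is the error analysis of such an estimator in the alternating transmission mode, which this sketch does not reproduce.

```python
def rho_hv0(s_h, s_v):
    """Standard sample estimator of the zero-lag co-polar
    correlation coefficient rho_hv(0) from complex H- and V-channel
    samples: |sum(conj(s_h) * s_v)| / sqrt(sum|s_h|^2 * sum|s_v|^2).
    By the Cauchy-Schwarz inequality the result lies in [0, 1]."""
    num = abs(sum(h.conjugate() * v for h, v in zip(s_h, s_v)))
    den = (sum(abs(h) ** 2 for h in s_h) *
           sum(abs(v) ** 2 for v in s_v)) ** 0.5
    return num / den

# Perfectly correlated channels give rho = 1 (e.g. pure rain).
h = [1 + 1j, 2 - 1j, 0.5 + 0.2j]
print(rho_hv0(h, [2 * x for x in h]))  # -> 1.0 (up to rounding)
```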
A Comprehensive Estimation Method for Kernel Function of Radar Signal Classifier
XU Jing, HE Minghao, HAN Jun, CHEN Changxiao
2015, 24(1): 218-222.
Abstract(492) PDF(1516)
The current electromagnetic environment changes rapidly and erratically, and the existing methods for evaluating Support vector machine (SVM) kernel functions used in radar signal recognition cannot cope with it. So kernel space separability, stability and