Message Board

Dear readers, authors, and reviewers: for any questions about submission, peer review, editing, or publication of this journal, you may leave a message on this page. We will reply as soon as possible. Thank you for your support!


Online First

The Online First section presents articles that have been peer reviewed and formally accepted by this journal. These articles are still in the copy-editing stage and have not yet been assigned to a volume, issue, or page numbers, but they can be cited by their DOIs.
Research on Virtual Coupled Train Control Method Based on GPC & VAPF
CAO Yuan, YANG Yaran, MA Lianchuan, WEN Jiakun
doi: 10.1049/cje.2021.00.241
Abstract:
In rail transit systems, improving transportation efficiency has become a research hotspot. In recent years, train control methods based on virtual coupling have attracted the attention of many scholars, and the train operation control method is the key both to realizing a virtual coupling train operation control system and to preventing accidents. Therefore, based on the existing research, a virtual coupled train model with nonlinear dynamics is established. The recursive least squares method is then applied to train running data to identify the model parameters of the nonlinear virtual coupling process, and the identified parameters are used in a variable parameter artificial potential field (VAPF). A fusion controller based on feature-based generalized model prediction (GPC) and VAPF is used to control the virtual coupled train and prevent collisions. Finally, a section of the Beijing-Shanghai high-speed railway is taken as the background to verify the effectiveness of the proposed method.
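The recursive least squares (RLS) identification step mentioned in the abstract can be illustrated with a minimal numerical sketch. The single-train Davis-type resistance model, the coefficient values, and the noise level below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Minimal recursive least squares (RLS) sketch for identifying the Davis
# resistance coefficients (c0, c1, c2) of a single train from running data.
# The model a = u - (c0 + c1*v + c2*v**2) is an illustrative assumption.
rng = np.random.default_rng(0)
true_c = np.array([0.6, 0.02, 0.001])

theta = np.zeros(3)          # parameter estimate
P = np.eye(3) * 1e3          # covariance of the estimate
lam = 0.99                   # forgetting factor

for _ in range(500):
    v = rng.uniform(10, 80)                      # speed sample (m/s)
    u = rng.uniform(-1, 1)                       # control acceleration
    phi = np.array([1.0, v, v**2])               # regressor
    a = u - true_c @ phi + rng.normal(0, 0.01)   # measured acceleration
    y = u - a                                    # resistance "measurement"
    k = P @ phi / (lam + phi @ P @ phi)          # RLS gain
    theta = theta + k * (y - phi @ theta)        # update estimate
    P = (P - np.outer(k, phi) @ P) / lam         # update covariance

print(theta)  # should approach true_c
```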
Time Optimal Trajectory Planning Algorithm for Robotic Manipulator Based on Locally Chaotic Particle Swarm Optimization
DU Yuxiao, CHEN Yihang
doi: 10.1049/cje.2021.00.373
Abstract:
Optimal trajectory planning is a fundamental problem in robotic research. For the time-optimal trajectory planning problem during the motion of a robotic arm, this paper proposes a method based on segmented polynomial interpolation functions combined with a locally chaotic particle swarm optimization (LCPSO) algorithm. While converging in the early or middle part of the search, the algorithm alleviates the local convergence problem of traditional particle swarm optimization (PSO) and improved learning factor PSO (IFPSO) algorithms. Finally, simulation experiments are executed in joint space to obtain the optimal time and a smooth motion trajectory for each joint, which shows that the method can effectively shorten the running time of the robotic manipulator and ensure the stability of the motion as well.
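As one plausible reading of the "locally chaotic" idea, the sketch below adds a logistic-map perturbation to a plain PSO loop; the objective function, constants, and perturbation rule are assumptions for illustration and do not reproduce the authors' LCPSO operators or the interpolation-based trajectory encoding.

```python
import numpy as np

# Generic PSO skeleton with a logistic-map chaotic perturbation applied to a
# few particles each iteration -- an illustrative sketch, not the LCPSO.
def sphere(x):
    return float(np.sum(x**2))

rng = np.random.default_rng(1)
n, dim, iters = 30, 5, 200
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()
z = rng.uniform(0.1, 0.9, (n, dim))          # chaotic state per particle

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
    x = x + v
    z = 4.0 * z * (1.0 - z)                  # logistic map
    stuck = rng.random(n) < 0.1              # perturb a few particles
    x[stuck] += 0.5 * (z[stuck] - 0.5)       # local chaotic jitter
    f = np.array([sphere(p) for p in x])
    better = f < pbest_f
    pbest[better], pbest_f[better] = x[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()

print(gbest, sphere(gbest))
```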
Prediction of Protein Subcellular Localization Based on Microscopic Images via Multi-Task Multi-Instance Learning
ZHANG Pingyue, ZHANG Mengtian, LIU Hui, YANG Yang
doi: 10.1049/cje.2020.00.330
Abstract:
Protein localization information is essential for understanding protein functions and their roles in various biological processes. Image-based methods for predicting protein subcellular localization have emerged in recent years because of the advantages of microscopic images in revealing the spatial expression and distribution of proteins in cells. However, image-based prediction is a very challenging task, due to the multi-instance nature of the problem and the low quality of the images. In this paper, we propose a multi-task learning strategy and mask generation to enhance prediction performance. Furthermore, we also investigate effective multi-instance learning schemes. We collect a large-scale dataset from the Human Protein Atlas database, and the experimental results show that the proposed multi-task multi-instance learning model outperforms both single-instance learning and common multi-instance learning methods by large margins.
Variance-SNR Based Noise Suppression on Linear Canonical Choi-Williams Distribution of LFM Signals
ZHANG Zhichao
doi: 10.1049/cje.2020.00.367
Abstract:
By solving the existing expectation-signal-to-noise ratio (expectation-SNR) based inequality model of the closed-form instantaneous cross-correlation function type of Choi-Williams distribution (CICFCWD), the linear canonical transform (LCT) free parameter selection strategies obtained are usually unsatisfactory. Since the second-order moment variance outperforms the first-order moment expectation in accurately characterizing output SNRs, this paper uses the variance analysis technique to improve the parameter selection strategies. The CICFCWD's average variance of deterministic signals embedded in additive zero-mean stationary circular Gaussian noise processes is first obtained. Then the so-called variance-SNRs are defined and applied to model a variance-SNR based inequality. A stronger inequality system is also formulated by integrating the expectation-SNR and variance-SNR based inequality models. Finally, a direct application of the system to the detection of noisy one-component and bi-component linear frequency-modulated signals is studied. The newly derived analytical algebraic constraints on LCT free parameters seem more accurate than the existing ones, achieving better noise suppression effects. Our methods have potential applications in optical, radar, communication, and medical signal processing.
Ergodic Capacity of NOMA-based Overlay Cognitive Integrated Satellite-UAV-Terrestrial Networks
GUO Kefeng, LIU Rui, DONG Chao, AN Kang, HUANG Yuzhen, ZHU Shibing
doi: 10.1049/cje.2021.00.316
Abstract:
Satellite communication has become a popular research topic owing to its inherent advantages of high capacity, large coverage, and no terrain restrictions. Hence, it can be combined with terrestrial communication to overcome the shortcomings of current wireless communication, such as limited coverage and high destructibility. In recent years, integrated satellite-unmanned aerial vehicle-terrestrial networks (IS-UAV-TNs) have aroused tremendous interest for their ability to reduce transmission latency and enhance quality-of-service (QoS) with improved spectrum efficiency. However, the rapidly growing access demands and the conventional spectrum allocation scheme lead to a shortage of spectrum resources. To tackle this challenge, the non-orthogonal multiple access (NOMA) scheme and the cognitive radio technique are utilized in IS-UAV-TNs, which can improve spectrum utilization. In this paper, the transmission capacity of a NOMA-enabled IS-UAV-TN under the overlay mode is discussed; specifically, we derive closed-form expressions of the ergodic capacity for both the primary and secondary networks. Besides, simulation results are provided to demonstrate the validity of the mathematical derivations and indicate the influences of critical system parameters on transmission performance. Furthermore, the orthogonal multiple access (OMA)-based scheme is compared with our NOMA-based scheme as a benchmark, which illustrates that our proposed scheme has better performance.
Vibration-Based Fault Diagnosis for Railway Point Machines Using VMD and Multiscale Fluctuation-Based Dispersion Entropy
SUN Yongkui, CAO Yuan, LI Peng, XIE Guo, WEN Tao, SU Shuai
doi: 10.1049/cje.2022.00.075
Abstract:
As one of the most important types of railway signaling equipment, railway point machines undertake the major task of ensuring train operation safety, so fault diagnosis for railway point machines has become a hot topic. Considering the anti-interference characteristics of vibration signals, this paper proposes a novel intelligent fault diagnosis method for railway point machines based on vibration signals. A feature extraction method combining variational mode decomposition (VMD) and multiscale fluctuation-based dispersion entropy (MFDE) is developed and verified to be an effective feature extraction tool. Then, a two-stage feature selection method based on Fisher discrimination and ReliefF is proposed and validated to be more powerful than single feature selection methods. Finally, a support vector machine (SVM) is utilized for fault diagnosis. Experimental comparisons show that the proposed method performs best: the diagnosis accuracies of the normal-reverse and reverse-normal switching processes reach 100% and 96.57%, respectively. In particular, this work is an attempt to apply new techniques to fault diagnosis of railway point machines, and it can also provide a reference for similar fields.
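A partial sketch of the diagnosis pipeline is given below: features are ranked by a Fisher discrimination score, the top-ranked ones are kept, and an SVM performs classification. The VMD/MFDE extraction and the ReliefF stage are omitted, and the random features stand in for real vibration features.

```python
import numpy as np
from sklearn.svm import SVC

# Partial sketch: Fisher-score feature ranking followed by SVM classification.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))
y = rng.integers(0, 2, 200)
X[y == 1, :5] += 1.5                      # make a few features informative

def fisher_score(X, y):
    classes = np.unique(y)
    overall = X.mean(axis=0)
    num = sum((y == c).sum() * (X[y == c].mean(axis=0) - overall) ** 2
              for c in classes)
    den = sum((y == c).sum() * X[y == c].var(axis=0) for c in classes)
    return num / (den + 1e-12)            # between-class over within-class

top = np.argsort(fisher_score(X, y))[::-1][:10]   # keep 10 best features
clf = SVC(kernel="rbf").fit(X[:150][:, top], y[:150])
print("test accuracy:", clf.score(X[150:][:, top], y[150:]))
```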
New Construction of Quadriphase Golay Complementary Pairs
LI Guojun, ZENG Fanxin, YE Changrong
doi: 10.1049/cje.2021.00.215
Abstract:
Based on an arbitrarily chosen binary Golay complementary pair (BGCP) $({\boldsymbol{c}},{\boldsymbol{d}})$ of even length ${\boldsymbol{N}}$, quadriphase sequences ${\boldsymbol{a}}$ and ${\boldsymbol{b}}$ of length ${\boldsymbol{N}}$ are first constructed by weighting the sum and difference of the pair with different weights. Secondly, a new quadriphase sequence ${\boldsymbol{u}}$ is given by interleaving the three sequences ${\boldsymbol{d}}$, ${\boldsymbol{a}}$, and $-{\boldsymbol{c}}$, and similarly the sequence ${\boldsymbol{v}}$ is obtained from the three sequences ${\boldsymbol{d}}$, ${\boldsymbol{b}}$, and ${\boldsymbol{c}}$. The resultant pair $({\boldsymbol{u}},{\boldsymbol{v}})$ is a quadriphase Golay complementary pair (QGCP) of length $3{\boldsymbol{N}}$. QGCPs play a fairly important role in communications, radar, and other fields.
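The defining property used above can be checked numerically: for a Golay complementary pair, the aperiodic autocorrelations of the two sequences sum to zero at every nonzero shift. The sketch below verifies this for a classical binary pair of length 4; it does not reproduce the paper's quadriphase weighting, which is only described qualitatively in the abstract.

```python
import numpy as np

# Check the defining property of a Golay complementary pair: the aperiodic
# autocorrelations of the two sequences sum to zero at all nonzero shifts.
def aperiodic_autocorr(s):
    s = np.asarray(s, dtype=complex)
    n = len(s)
    return np.array([np.sum(s[k:] * np.conj(s[:n - k])) for k in range(n)])

# A classical binary Golay complementary pair of length 4.
c = np.array([1, 1, 1, -1])
d = np.array([1, 1, -1, 1])

total = aperiodic_autocorr(c) + aperiodic_autocorr(d)
print(total)   # [8, 0, 0, 0]: zero at every nonzero shift
```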
EAODroid: Android Malware Detection based on Enhanced API Order
HUANG Lu, XUE Jingfeng, WANG Yong, QU Dacheng, CHEN Junbao, ZHANG Nan, ZHANG Li
doi: 10.1049/cje.2021.00.451
Abstract:
The development of smart mobile devices not only brings convenience to people's lives but also provides a breeding ground for Android malware. The sharply increasing volume of malware poses a disastrous threat to personal privacy in the information age. Based on the fact that malware heavily resorts to system APIs to perform its malicious actions, a variety of API-based detection approaches have been proposed, but most of them do not consider the relationships between APIs. We contribute a new approach based on Enhanced API Order for Android malware detection, named EAODroid. EAODroid learns the similarity of system APIs from a large number of API sequences and groups similar APIs into clusters. The extracted API clusters are further used to enhance the original API calls executed by an app to characterize behaviors and perform classification. We perform multi-dimensional experiments to evaluate EAODroid on three datasets with ground truth. Comparisons with many state-of-the-art works show that EAODroid achieves effective performance in Android malware detection.
A Directly Readable Halftone Multifunctional Color QR Code
HUANG Yuan, CAO Peng, LV Guangwu
doi: 10.1049/cje.2021.00.366
Abstract:
The color quick response (QR) code is an important direction for the future development of QR codes; it has become a research hotspot because its colors provide additional functional characteristics and QR code technology is now widely applied. Existing color QR codes have solved the problem of information storage capacity, but they require enormous hardware and software support systems, making direct readability an urgent issue. This paper proposes a novel color QR code that combines multiple types of identification information. The code combines multiplexing and color-coding technology to present publicly encoded information (such as advertisements and public query information) as a plain code, while traceability, blockchain, anti-counterfeiting authentication, and other information are concealed in the form of a hidden code. We elaborate the basic principle of this code, construct its mathematical model, and supply a set of algorithm design processes that break through the key technology of halftone printout. The experimental results show that the proposed color QR code realizes multi-code integration and can be read directly without special scanning equipment, which gives it unique advantages in the field of printed anti-counterfeiting labels.
Analysis of Capacitance Characteristics of Light-Controlled Electrostatic Conversion Device
LIU Yujie, WANG Yang, JIN Xiangliang, PENG Yan, LUO Jun, YANG Jun
doi: 10.1049/cje.2021.00.272
Abstract:
In recent years, converting environmental energy into electrical energy to meet modern society's needs for clean and sustainable energy has become a research hotspot. Electrostatic energy is a pollution-free environmental energy source, and using electrostatic conversion devices to convert it into electrical energy has been proven to be a feasible solution for sustainable development. This paper proposes a light-controlled electrostatic conversion device (LCECD). When an electrostatic pulse arrives, avalanche breakdown occurs inside the LCECD and a low-resistance path is generated to clamp the voltage, thereby outputting smooth square waves of voltage and current. Experiments have proved that the LCECD can convert 30 kV electrostatic pulses into usable electrical energy for the normal operation of back-end LED lights. In addition, the parasitic capacitance of the LCECD changes after it is exposed to light, and it differs for different wavelengths of light. The smaller the parasitic capacitance of the LCECD, the higher the efficiency of its electrostatic conversion, which is of great significance to the design of future electrostatic conversion devices.
A Novel Wideband Wilkinson Pulse Combiner with Enhanced Low Frequency Isolation
WANG Zitong, WU Qi, SU Donglin
doi: 10.1049/cje.2021.00.429
Abstract:
A novel Wilkinson pulse combiner (WPC) is proposed for combining Gaussian pulse signals. The WPC requires a very wide bandwidth, small size, and high port isolation. To widen the operating bandwidth, the design adopts the form of an eight-section WPC, with a capacitor connected in series with the isolating resistor of each section. After capacitive loading, the isolation between the WPC input ports is significantly improved at low frequency, and consequently the operating bandwidth of the WPC is increased from 13:1 to 31:1. Compared with a conventional Wilkinson combiner of the same bandwidth, the proposed WPC reduces the size by 40%. In addition, all ports are well impedance matched, and the insertion loss in the operating frequency band is less than 0.5 dB. To verify the feasibility of the design, a prototype was fabricated and measured. Experiments show that the novel WPC is more advantageous for generating dual-Gaussian pulse signals.
Code-Based Conjunction Obfuscation
ZHANG Zheng, ZHANG Zhuoran, ZHANG Fangguo
doi: 10.1049/cje.2020.00.377
Abstract:
A conjunction can be viewed as pattern matching with wildcards: an input string of length n matches a pattern of the same length if and only if it is the same as the pattern at all non-wildcard positions. Since 2013, there have been abundant works on conjunction obfuscation based on the Generic Group Model, the LWE assumption, the LPN assumption, and so on. After obfuscation, an adversary cannot find the pattern or an accepting input from the obfuscated program. In this work, we propose a conjunction obfuscation based on the General Decoding Problem. In addition to satisfying distributional virtual black-box security, our obfuscation also achieves strong functionality preservation, which solves the open problem in the work of Bartusek et al.; that is, we construct a conjunction obfuscation that is simultaneously correct and secure under a standard assumption. The conjunction obfuscation can resist the information set decoding attack and the structured error attack under some parameter constraints.
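The functionality being obfuscated, pattern matching with wildcards, can be stated in a few lines. The sketch below shows only this plaintext predicate, not the code-based obfuscation; the symbol '*' for a wildcard is an illustrative convention.

```python
# Plain (unobfuscated) conjunction predicate: a pattern over {'0','1','*'}
# accepts an input string of the same length iff every non-wildcard
# position matches. The obfuscation in the paper hides the pattern itself.
def conjunction_match(pattern: str, x: str) -> bool:
    assert len(pattern) == len(x)
    return all(p == '*' or p == c for p, c in zip(pattern, x))

print(conjunction_match("1*0*1", "11011"))  # True: wildcards ignored
print(conjunction_match("1*0*1", "10111"))  # False: position 2 mismatches
```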
Graph Hilbert Neural Network
LIU Feng, YANG Chengyi, ZHOU Aimin
doi: 10.1049/cje.2022.00.096
Abstract:
We present the graph Hilbert neural network (GHNN), a novel framework for graph neural networks based on graph signal processing (GSP) theory, which differs from previous methods based on the convolution theorem. The graph Hilbert transform (GHT) can explain the emergence of complex eigenvalues and complex eigenvectors, which provides a theoretical basis for convolution operations on digraphs. Because of its linear shift-invariant (LSI) property and its definition in the spectral domain, the GHT can be expressed in the form of a polynomial filter, which is applied to construct the layers of the graph neural network. The graph Laplacian matrix is adopted as the graph shift operator in the LSI filter to realize the property of localization. To make better use of both low-frequency and high-frequency information, we design a two-channel filter bank that performs low-pass and high-pass filtering. Experiments on three benchmark datasets show that the proposed GHNN outperforms previous spectral graph CNNs on the task of graph-based semi-supervised classification.
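A layer of the kind described above is a polynomial filter in the graph shift operator. The sketch below implements a generic polynomial graph filter with the graph Laplacian as the shift operator; the filter coefficients and the toy graph are assumptions, and this is not the authors' GHNN layer.

```python
import numpy as np

# Generic polynomial graph filter y = sum_k theta_k * L^k @ x, with the
# graph Laplacian L as the shift operator.
def graph_laplacian(A):
    D = np.diag(A.sum(axis=1))
    return D - A

def polynomial_filter(A, x, theta):
    L = graph_laplacian(A)
    y = np.zeros_like(x, dtype=float)
    Lk_x = x.astype(float)
    for k, t in enumerate(theta):
        if k > 0:
            Lk_x = L @ Lk_x          # L^k @ x computed iteratively
        y += t * Lk_x
    return y

A = np.array([[0, 1, 0, 1],          # small undirected 4-node graph
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
x = np.array([1.0, 0.0, 0.0, 0.0])   # graph signal
print(polynomial_filter(A, x, theta=[0.5, 0.3, 0.1]))
```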
Security Analysis for SCKHA Algorithm: Stream Cipher Algorithm Based on Key Hashing Technique
Souror Samia, El-Fishawy Nawal, Badawy Mohammed
doi: 10.1049/cje.2021.00.383
Abstract:
The strength of any cryptographic algorithm is mostly based on the difficulty of its encryption key. However, the larger the shared key, the more computational operations and processing time the cryptographic algorithm requires. To avoid increasing the key size while keeping it secret, we must hide it. The authors proposed a stream cipher algorithm that can hide the symmetric key [1] through hashing and splitting techniques. This paper aims to provide a security analysis and performance assessment of this algorithm. The algorithm is compared with three commonly used stream cipher algorithms, RC4, Rabbit, and Salsa20, in terms of execution time and throughput. The comparison has been conducted with different data types such as audio, image, text, DOC, and PDF files. Experiments proved the superiority of the SCKHA algorithm over both the Salsa20 and Rabbit algorithms, and the results also proved the difficulty of recovering the secret key of SCKHA. Although RC4 has a lower encryption time than SCKHA, it is not recommended for use because of its vulnerabilities. Security factors that affect the performance, such as the avalanche effect, correlation analysis, histogram analysis, and Shannon information entropy, are highlighted. In addition, the ciphertext format of the algorithm gives it the ability to search over encrypted data.
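One of the security factors listed above, the avalanche effect, can be measured by flipping a single input bit and counting the fraction of output bits that change. In the sketch below SHA-256 stands in for the cipher under test, since the SCKHA routine itself is not reproduced here.

```python
import hashlib

# Measure the avalanche effect of a transformation: flip one input bit and
# report the fraction of output bits that change. SHA-256 stands in for the
# cipher under test; for SCKHA one would substitute its encryption routine.
def bits(data: bytes) -> str:
    return ''.join(f'{b:08b}' for b in data)

def avalanche(msg: bytes, bit_index: int) -> float:
    flipped = bytearray(msg)
    flipped[bit_index // 8] ^= 1 << (bit_index % 8)   # flip a single bit
    h1 = bits(hashlib.sha256(msg).digest())
    h2 = bits(hashlib.sha256(bytes(flipped)).digest())
    changed = sum(a != b for a, b in zip(h1, h2))
    return changed / len(h1)

print(f"{avalanche(b'message board test', 3):.3f}")   # ideally close to 0.5
```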
Towards Order-preserving and Zero-copy Communication on Shared Memory for Large Scale Simulation
LI Xiuhe, SHEN Yang, LIN Zhongwei, ZHAO Shunkai, SHI Qianqian, DAI Shaoqi
doi: 10.1049/cje.2021.00.393
Abstract:
Parallel simulation generally needs efficient, reliable, and order-preserving communication. In this article, ZeROshm, a zero-copy, reliable, and order-preserving intra-node message passing approach, is proposed. It partitions shared memory into segments assigned to processes for receiving messages. Each segment consists of two levels of indexes, L1 and L2, that record the order of messages in the host segment, and the processes read from and write to the segments directly according to the indexes, thereby eliminating buffer allocation and copying. Experimental results show that ZeROshm exhibits nearly equivalent performance to MPI for small messages and superior performance for large messages: ZeROshm costs 43%, 40%, and 55% less time, respectively, in pure communication, communication with contention, and a real Phold simulation within a single node. In a hybrid environment, the combination of ZeROshm and MPI also shortens the execution time of the Phold simulation by about 42% compared to pure MPI.
On UAV Serving Node Deployment for Temporary Coverage in Forest Environment: A Hierarchical Deep Reinforcement Learning Approach
WANG Li, WU Xuewei, WANG Yanhui, XIAO Zhe, LI Liang, FEI Aiguo
doi: 10.1049/cje.2021.00.326
Abstract:
Unmanned aerial vehicles (UAVs) can be effectively used as serving stations in emergency communications because of their free movement, strong flexibility, and dynamic coverage. In this paper, we propose a coordinated multiple points (CoMP) based UAV deployment framework to improve the system average ergodic rate, using the fuzzy C-means (FCM) algorithm to cluster the ground users and considering exclusive forest channel models for the two cases, i.e., users associated with a broken base station (BS) or with an available one. In addition, we derive an upper bound of the average ergodic rate to reduce computational complexity. Since deep reinforcement learning (DRL) can deal with the complex forest environment while the large action and state spaces of UAVs lead to slow convergence, we use a ratio-cut method to divide the UAVs into groups and propose a hierarchical clustering DRL (HC-DRL) approach with quick convergence to optimize the UAV deployment. Simulation results show that the proposed framework can effectively reduce the complexity and outperforms the counterparts in convergence speed.
An Adaptive Interactive Multiple-Model Algorithm Based on End-to-End Learning
ZHU Hongfeng, XIONG Wei, CUI Yaqi
doi: 10.1049/cje.2021.00.442
Abstract:
The interactive multiple-model (IMM) filter is a popular choice for target tracking. However, designing transition probability matrices (TPMs) for IMMs with little prior knowledge is a considerable challenge, and the TPM is one of the fundamental factors influencing IMM performance: IMMs with inaccurate TPMs can find it difficult to track target maneuvers and yield poor tracking results. To address this challenge, we propose an adaptive IMM algorithm based on end-to-end learning. In our method, a neural network is utilized to estimate the TPM in real time based on partial parameters of the IMM at each time step, resulting in a generalized recurrent neural network. Through end-to-end learning in the tracking task, the dataset cost of the proposed algorithm is smaller and its generalizability is stronger. Simulation and automatic dependent surveillance-broadcast (ADS-B) tracking experiments show that the proposed algorithm has better tracking accuracy and robustness with less prior knowledge.
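The role of the TPM can be seen in the IMM interaction (mixing) step, sketched below with illustrative numbers: the TPM and the previous model probabilities determine the weights used to blend the per-model estimates, which is why an inaccurate TPM degrades tracking.

```python
import numpy as np

# One IMM interaction (mixing) step: given the transition probability matrix
# (TPM) and the previous model probabilities, compute the mixing weights used
# to blend the per-model state estimates. Values here are illustrative only.
tpm = np.array([[0.95, 0.05],      # row i: P(switch from model i to model j)
                [0.10, 0.90]])
mu = np.array([0.7, 0.3])          # previous model probabilities

c = tpm.T @ mu                     # predicted model probabilities c_j
mix = (tpm * mu[:, None]) / c      # mix[i, j] = P(was model i | now model j)
print(c, mix, sep="\n")
```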
Zero-Cerd: A Self-blindable Anonymous Authentication System Based on Blockchain
YANG Kunwei, YANG Bo, WANG Tao, ZHOU Yanwei
doi: 10.1049/cje.2022.00.047
Abstract:
While the internet of things (IoT) brings convenience to people's lives, it also brings hidden worries about data security. As an important barrier for protecting data security, identity authentication is widely used in the IoT; however, users' identity privacy must be protected while their identity is authenticated. Anonymous authentication technology is often used to resolve the contradiction between legitimacy and privacy in the authentication process. Existing anonymous authentication schemes have many problems in practical applications, such as the inability to achieve complete anonymity, high computational complexity, and corruption of the central authority. Aiming at the privacy of authentication, we propose Zero-Cerd, a self-blindable anonymous authentication system based on blockchain and a dynamic accumulator. The self-blinding property of the credential enables users themselves to generate a new valid pseudonymous credential. With the help of zero-knowledge proof technology, users can prove the validity of their credentials without disclosing any information. Security analysis shows that our scheme achieves the expected security objectives. Compared with existing schemes, our scheme has the advantages of complete anonymity and high efficiency, and it is more suitable for IoT applications with privacy protection requirements.
Principled Design of Translation, Scale, and Rotation Invariant Variation Operators for Metaheuristics
TIAN Ye, ZHANG Xingyi, HE Cheng, TAN Kay Chen, JIN Yaochu
doi: 10.1049/cje.2022.00.100
Abstract:
In the past three decades, a large number of metaheuristics have been proposed and have shown high performance in solving complex optimization problems. While most variation operators in existing metaheuristics are empirically designed, this paper aims to design new operators automatically, which are expected to be search-space independent and thus exhibit robust performance on different problems. For this purpose, this work first investigates the influence of translation invariance, scale invariance, and rotation invariance on the search behavior and performance of some representative operators. We then deduce the generic form of translation, scale, and rotation invariant operators, and propose a principled approach for the automated design of operators, which searches for high-performance operators based on the deduced generic form. The experimental results demonstrate that the operators generated by the proposed approach outperform state-of-the-art ones on a variety of problems with complex landscapes and up to 1000 decision variables.
Modeling and Measurement of 3D Solenoid Inductor Based on Through-Silicon Vias
YIN Xiangkun, WANG Fengjuan, ZHU Zhangming, Vasilis F. Pavlidis, LIU Xiaoxian, LU Qijun, LIU Yang, YANG Yintang
doi: 10.1049/cje.2020.00.340
Abstract:
Through-silicon vias (TSVs) provide vertical interconnectivity among the stacked dies in three-dimensional integrated circuits (3D ICs) and are a promising option for miniaturizing 3D solenoid inductors for on-chip radio-frequency applications. In this paper, a rigorous analytical inductance model of the 3D solenoid inductor is proposed based on the concepts of loop and partial inductance, and a series of 3D samples is fabricated on a 12-inch high-resistivity silicon wafer using a low-cost, standard CMOS-compatible process. The results of the proposed model match very well with those obtained by simulation and measurement. With this model, the inductance can be estimated accurately and efficiently over a wide range of inductor windings and TSV heights, spaces, and pitches.
Multi-Scale Global Retrieval and Temporal-Spatial Consistency Matching Based Long-Term Tracking Network
SANG Haifeng, LI Gongming, ZHAO Ziyu
doi: 10.1049/cje.2021.00.195
Abstract:
Compared with the traditional short-term object tracking task based on temporal-spatial consistency, the long-term object tracking task faces the challenges of object disappearance and dramatic changes in object scale and appearance. To address these challenges, this paper proposes a multi-scale global retrieval and temporal-spatial consistency matching based long-term tracking network (MTTNet). MTTNet regards the long-term tracking task as a single-sample object detection task and takes full advantage of the temporal-spatial consistency assumption between adjacent video frames to improve tracking accuracy. MTTNet utilizes the information of a single sample as guidance to perform full-image multi-scale retrieval on any instance and does not require online learning or trajectory refinement, so any error generated during the detection process will not affect its performance on subsequent video frames. This overcomes the accumulation of errors in the tracking process of traditional object tracking networks. We also introduce atrous spatial pyramid pooling to address the challenge of dramatic changes in the scale and appearance of the object. Experimental results show that MTTNet achieves better performance than composite processing methods on two large datasets.
Attrleaks on the Edge: Exploiting Information Leakage from Privacy-Preserving Co-Inference
WANG Zhibo, LIU Kaixin, HU Jiahui, REN Ju, GUO Hengchang, YUAN Wei
doi: 10.1049/cje.2022.00.031
Abstract:
Collaborative inference (co-inference) accelerates deep neural network inference by extracting representations at the device and making predictions at the edge server, which, however, might disclose sensitive information about users' private attributes (e.g., race). Although many privacy-preserving mechanisms for co-inference have been proposed to eliminate privacy concerns, privacy leakage of sensitive attributes might still happen during inference. In this paper, we explore privacy leakage against privacy-preserving co-inference by decoding the uploaded representations into a vulnerable form. We propose a novel attack framework, AttrLeaks, which consists of a shadow model of the feature extractor (FE), a susceptibility reconstruction decoder, and a private attribute classifier. Based on our observation that values in the inner layers of the FE (internal representations) are more sensitive to attack, the shadow model is proposed to simulate the FE of the victim in the black-box scenario and generate the internal representations. Then, the susceptibility reconstruction decoder is designed to transform the uploaded representations of the victim into the vulnerable form, which enables the malicious classifier to easily predict the private attributes. Extensive experimental results demonstrate that AttrLeaks outperforms the state-of-the-art in terms of attack success rate.
Deep Contextual Representation Learning for Identifying Essential Proteins via Integrating Multisource Protein Features
LI Weihua, LIU Wenyang, GUO Yanbu, WANG Bingyi, QING Hua
doi: 10.1049/cje.2022.00.053
Abstract:
Essential proteins with biological functions are necessary for the survival of organisms. Computational methods for recognizing essential proteins can reduce the workload and provide candidate proteins for biologists. However, existing methods fail to identify essential proteins efficiently and generally do not make full use of amino acid sequence information to improve recognition performance. In this work, we propose an end-to-end deep contextual representation learning framework called DeepIEP to automatically learn biologically discriminative features without prior knowledge based on heterogeneous protein network information. Specifically, the model attaches amino acid sequences as attributes of each protein node in the protein interaction network and then automatically learns topological features from the protein interaction network using graph embedding algorithms. Next, multi-scale convolutions and gated recurrent unit networks are used to extract contextual features from gene expression profiles. Extensive experiments confirm that DeepIEP is an effective and efficient feature learning framework for identifying essential proteins and that contextual features of protein sequences can improve the recognition performance of essential proteins.
HRPose: Real-Time High-Resolution 6D Pose Estimation Network Using Knowledge Distillation
GUAN Qi, SHENG Zihao, XUE Shibei
doi: 10.1049/cje.2021.00.211
Abstract:
Real-time 6D object pose estimation is essential for many real-world applications, such as robotic grasping and augmented reality. To achieve accurate object pose estimation from RGB images in real time, we propose an effective and lightweight model, namely the high-resolution 6D pose estimation network (HRPose). We adopt the efficient and small HRNetV2-W18 as the feature extractor to reduce computational burdens while generating accurate 6D poses. With only 33% of the model size and lower computational costs, our HRPose achieves performance comparable to state-of-the-art models. Moreover, by transferring knowledge from a large model to our proposed HRPose through output and feature-similarity distillations, the performance of our HRPose is improved in both effectiveness and efficiency. Numerical experiments on the widely used LINEMOD benchmark demonstrate the superiority of our proposed HRPose against state-of-the-art methods.
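The output distillation mentioned above is commonly implemented as a temperature-softened KL divergence between teacher and student logits, combined here with an L2 feature-similarity term. The sketch below shows this generic loss; the temperature, weighting, and tensor shapes are assumptions, not HRPose's exact training objective.

```python
import torch
import torch.nn.functional as F

# Generic knowledge-distillation losses: a temperature-softened KL term on
# the outputs plus an L2 feature-similarity term.
def distillation_loss(student_logits, teacher_logits,
                      student_feat, teacher_feat, T=4.0, alpha=0.5):
    soft_kl = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                       F.softmax(teacher_logits / T, dim=1),
                       reduction="batchmean") * (T * T)
    feat_l2 = F.mse_loss(student_feat, teacher_feat)
    return alpha * soft_kl + (1.0 - alpha) * feat_l2

s_logits, t_logits = torch.randn(8, 10), torch.randn(8, 10)
s_feat, t_feat = torch.randn(8, 64), torch.randn(8, 64)
print(distillation_loss(s_logits, t_logits, s_feat, t_feat))
```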
A fine-grained object detection model for aerial images based on yolov5 deep neural network
ZHANG Rui, XIE Cong, DENG Liwei
, doi: 10.1049/cje.2022.00.044
Abstract:
Currently, many advanced object detection algorithms are mainly designed for objects in natural scenes and are rarely dedicated to fine-grained objects, which seriously limits their application in remote sensing object detection. Therefore, how to apply horizontal detection methods to remote sensing images is of important research significance. Mainstream remote sensing object detection algorithms handle oriented objects by angle regression, but the periodicity of the angle leads to very large losses in this regression, which increases the difficulty of model learning. The circular smooth label (CSL) method solves this problem well by transforming angle regression into a classification task. YOLOv5 combines many excellent modules and methods proposed in recent years, which greatly improves the detection accuracy of small objects. Therefore, we use YOLOv5 as the baseline, combine it with the CSL method to learn the angles of arbitrarily oriented targets, and distinguish fine-grained instance classes by adding an attention mechanism module, thereby accomplishing fine-grained object detection in remote sensing images. Our improved model achieves an average category accuracy of 39.2 on the FAIR1M dataset. Although this result is not yet fully satisfactory, the approach is efficient and simple and reduces the hardware requirements of the model.
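For readers unfamiliar with CSL, the following sketch (an illustration under assumptions, not the paper's code) encodes a ground-truth angle as a circularly smoothed classification label with a Gaussian window; the bin count and window radius are arbitrary choices.

import numpy as np

def circular_smooth_label(angle_deg, num_bins=180, radius=6):
    bins = np.arange(num_bins)
    # circular distance from every angle bin to the ground-truth bin
    d = np.abs(bins - (angle_deg % num_bins))
    d = np.minimum(d, num_bins - d)
    label = np.exp(-(d ** 2) / (2.0 * radius ** 2))  # Gaussian window
    label[d > radius] = 0.0                          # zero outside the window
    return label

label = circular_smooth_label(2)
print(label.argmax(), label[178] > 0)                # 2 True: the label wraps around 0/180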
Track-oriented Marginal Poisson Multi-Bernoulli Mixture Filter for Extended Target Tracking
DU Haocui, XIE Weixin, LIU Zongxiang, LI Liangqun
, doi: 10.1049/cje.2021.00.194
Abstract:
In this paper, we derive and propose a track-oriented marginal Poisson multi-Bernoulli mixture (TO-MPMBM) filter to address the problem that standard random finite set (RFS) filters cannot build continuous trajectories for multiple extended targets. Firstly, the Poisson point process (PPP) model and the multi-Bernoulli mixture (MBM) model are used to establish the set of birth trajectories and the set of existing trajectories, respectively. Secondly, the proposed filter recursively propagates the marginal association distributions and the Poisson multi-Bernoulli mixture (PMBM) density over the set of alive trajectories. Finally, after the pruning and merging process, the trajectories with existence probability greater than a given threshold are extracted as the estimated target trajectories. A comparison of the proposed filter with existing trajectory filters in two classical scenarios confirms the validity and reliability of the TO-MPMBM filter.
Robust Beamforming Design for IRS-Aided Cognitive Radio Networks with Bounded CSI Errors
ZHANG Lei, WANG Yu, SHANG Yulong, TIAN Jianjie, JIA Ziyan
, doi: 10.1049/cje.2021.00.254
Abstract:
In this paper, an intelligent reflecting surface (IRS) is introduced to enhance the performance of cognitive radio (CR) systems. The robust beamforming is designed based on a combined bounded channel state information (CSI) error model for the primary user (PU) related channels. The transmit precoding at the secondary user (SU) transmitter and the phase shifts at the IRS are jointly optimized to minimize the SU's total transmit power, subject to the quality of service of the SUs, the limited interference imposed on the PU, and the unit-modulus constraints of the reflective beamforming. Simulation results verify the efficiency of the proposed algorithm and reveal that the number of phase shifts at the IRS should be carefully chosen to obtain a tradeoff between the total minimum transmit power and the feasibility rate of the optimization problem.
Unique Parameters Selection Strategy of Linear Canonical Wigner Distribution via Multiobjective Optimization Modeling
SHI Xiya, WU Anyang, SUN Yun, QIANG Shengzhou, JIANG Xian, HAN Puyu, CHEN Yunjie, ZHANG Zhichao
, doi: 10.1049/cje.2021.00.338
Abstract:
There are many kinds of linear canonical transform (LCT)-based Wigner distributions (WDs), which are very effective in detecting noisy linear frequency-modulated (LFM) signals. Among the WDs in LCT domains, the instantaneous cross-correlation function type of Wigner distribution (ICFWD) attracts much attention from scholars, because it achieves not only low computational complexity but also good detection performance. However, the existing LCT free parameter selection strategy, a solution of the expectation-based output signal-to-noise ratio (SNR) optimization model, is not unique. In this paper, by introducing a variance-based output SNR optimization model, a multiobjective optimization model is established. The existence and uniqueness of the optimal parameters of the ICFWD are then investigated. The solution of the multiobjective optimization model for a one-component LFM signal corrupted by zero-mean stationary circular Gaussian noise is derived. A comparison between the unique parameter selection strategy and the previous one is carried out. The theoretical results are also verified by numerical simulations.
Technique for Recovering Wavefront Phase Bad Points by Deep Learning
WU Jiali, LIANG Jingyuan, FEI Shaolong, ZHONG Xirui
, doi: 10.1049/cje.2022.00.008
Abstract:
In adaptive optics (AO) systems, bad spots detected by the wavefront detector affect the wavefront reconstruction accuracy. A convolutional neural network (CNN) model is established to estimate the missing information at bad points and reduce the reconstruction error of the distorted wavefront. By training on 10,000 groups of spot array images and the corresponding 30th-order Zernike coefficient samples, the model learns the relationship between the light intensity image and the Zernike coefficients, and predicts the Zernike mode coefficients from the spot array image to restore the wavefront. Following the wavefront restoration of 1,000 groups of test set samples, the root mean square (RMS) error between the predicted value and the real value was maintained at approximately 0.2 μm. Field wavefront correction experiments were carried out on three links of 600 m, 1.3 km, and 10 km. The wavefront peak-to-valley (PV) values corrected by the CNN decreased from 12.964 μm, 13.958 μm, and 31.310 μm to 0.425 μm, 3.061 μm, and 11.156 μm, respectively, and the RMS values decreased from 2.156 μm, 9.158 μm, and 12.949 μm to approximately 0.166 μm, 0.852 μm, and 6.963 μm, respectively. The results show that the CNN method predicts the missing wavefront information of the sub-aperture from the bad spot image, reduces the wavefront restoration error, and improves the wavefront correction performance.
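A minimal sketch of the kind of CNN regressor described above, predicting 30 Zernike mode coefficients from a spot-array intensity image; the layer sizes and the 128x128 input resolution are assumptions, not the paper's architecture.

import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 128), nn.ReLU(),
    nn.Linear(128, 30),                               # 30 Zernike mode coefficients
)
spot_images = torch.randn(4, 1, 128, 128)             # batch of spot-array images
print(model(spot_images).shape)                       # torch.Size([4, 30])
# training would minimise e.g. nn.MSELoss() against measured Zernike coefficients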
Improving Cross-Corpus Speech Emotion Recognition using Deep Local Domain Adaptation
ZHAO Huijuan, YE Ning, WANG Ruchuan
, doi: 10.1049/cje.2021.00.196
Abstract:
Due to insufficient data and the high cost of data annotation, it is usually necessary to use knowledge transfer for speech emotion recognition. However, the uncertainty and subjectivity of emotion make speech emotion recognition based on transfer learning more challenging. Domain adaptation based on the maximum mean discrepancy considers the marginal alignment of the source and target domains but pays no regard to the class prior distributions in the two domains, which reduces the transfer efficiency. To solve this problem, a novel cross-corpus speech emotion recognition framework based on local domain adaptation is proposed, in which a local weighted maximum mean discrepancy is used to evaluate the distance between different emotion datasets. Experimental results show that cross-corpus speech emotion recognition is improved compared with other cross-corpus methods, including global domain adaptation and direct cross-corpus speech emotion recognition.
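For context, the sketch below computes a plain RBF-kernel maximum mean discrepancy between source-corpus and target-corpus feature batches; the paper's local weighted MMD additionally weights the kernel terms by (pseudo-)class membership, which is not reproduced here, and the kernel bandwidth is an assumption.

import torch

def rbf_mmd2(x, y, sigma=1.0):
    # squared MMD estimate with a Gaussian (RBF) kernel
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2.0 * k(x, y).mean()

src = torch.randn(32, 128)   # source-corpus utterance features
tgt = torch.randn(32, 128)   # target-corpus utterance features
print(rbf_mmd2(src, tgt).item())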
A Beam-Steering Broadband Microstrip Antenna with High Isolation
JIANG Zhaoneng, SHA Yongxin, NIE Liying, XUAN Xiaofeng
, doi: 10.1049/cje.2021.00.452
Abstract:
In this paper, a 4.2-7.2 GHz (52.6% fractional bandwidth) beam-steering microstrip antenna is proposed. The proposed antenna consists of three tapered slots and three feeds. The three radiation directions of the antenna in the plane are independent of each other, and the three feeds correspond to the three radiation structures. Symmetric isolation trenches are introduced to improve the isolation between different ports. Radiation pattern simulations and measurements show horizontal beam steering at the sampled frequencies of 4.2, 5, 6, and 7.2 GHz. The results show that the overlapped beams of the three ports in the E-plane and H-plane can cover more than 200 degrees and 60 degrees, respectively. Apart from the beam-steering capability, high isolation (> 28 dB) of the proposed antenna is obtained in the operating band.
A Novel Adaptive InSAR Phase Filtering Method Based on Complexity Factors
XU Huaping, WANG Yuan, LI Chunsheng, ZENG Guobing, LI Shuo, LI Shuang, REN Chong
, doi: 10.1049/cje.2021.00.280
Abstract:
Phase filtering is an essential step in interferometric synthetic aperture radar (InSAR). For interferograms over complicated and changeable terrain, the increasing resolution of InSAR images makes it even more difficult. In this paper, a novel adaptive InSAR phase filtering method based on complexity factors is proposed. Firstly, three complexity factors based on the noise distribution and terrain slope information of the interferogram are selected. A complexity indicator composed of the three complexity factors is used to guide the adaptive selection of the most suitable and effective filtering strategy for different areas. Then, a complexity scalar is calculated, which guides the adaptive local fringe frequency (LFF) estimation and adaptive parameter calculation in the different filtering methods. Finally, validations are performed on simulated and real data. The performance comparison between three representative phase filtering methods and the proposed method validates the effectiveness and superiority of the proposed method.
MalFSM: Feature Subset Selection Method for Malware Family Classification
KONG Zixiao, XUE Jingfeng, WANG Yong, ZHANG Qian, HAN Weijie, ZHU Yufen
, doi: 10.1049/cje.2022.00.038
Abstract:
Malware detection has been a hot spot in cyberspace security and academic research. We investigate the correlation between the opcode features of malicious samples and perform feature extraction, selection, and fusion by filtering redundant features, thus alleviating the dimensional disaster problem and achieving efficient identification of malware families for proper classification. Malware authors use obfuscation technology to generate a large number of malware variants, which imposes a heavy analysis burden on security researchers and consumes a lot of resources in both time and space. To this end, we propose the MalFSM framework. Through the feature selection method, we reduce the 735 opcode features contained in the Kaggle dataset to 16, and then fuse them with two metadata features (file line count and file size) for a total of 18 features, and find that machine learning classification on these features is efficient and highly accurate. We analyzed the correlation between the opcode features of malicious samples and interpreted the selected features. Our comprehensive experiments show that the highest classification accuracy of MalFSM reaches 98.6% and the classification time is only 7.76 s on Microsoft's Kaggle malware dataset.
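The exact selection procedure belongs to the paper, but the general idea can be sketched under stated assumptions: rank the 735 opcode-frequency features with a generic importance measure, keep the top 16, and append the two metadata features before classification. The synthetic data, the random-forest importances, and the nine family labels below are all placeholders, not the authors' method.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_opcode = pd.DataFrame(rng.integers(0, 50, size=(200, 735)),
                        columns=[f"op_{i}" for i in range(735)])
y = rng.integers(0, 9, size=200)                     # placeholder family labels

# rank opcode features by a generic importance measure and keep the top 16
rf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_opcode, y)
top16 = X_opcode.columns[np.argsort(rf.feature_importances_)[-16:]]

# fuse with the two metadata features (file line count and file size)
meta = pd.DataFrame({"line_count": rng.integers(100, 100000, size=200),
                     "file_size": rng.integers(1000, 10000000, size=200)})
X_final = pd.concat([X_opcode[top16].reset_index(drop=True), meta], axis=1)
print(X_final.shape)                                 # (200, 18)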
Delay and energy consumption oriented UAV inspection business collaboration computing mechanism in edge computing based power IoT
SHAO Sujie, LI Yi, GUO Shaoyong, WANG Chenhui, CHEN Xingyu, QIU Xuesong
, doi: 10.1049/cje.2021.00.312
Abstract:
With the development of Internet of Things (IoT) technology and smart grid infrastructure, edge computing has become an effective solution to meet the delay requirements of the electric power IoT. Due to the limitations of battery capacity and the data transmission mode of IoT terminals, business collaboration computing must take into account the energy consumption of the terminals. Since delay and energy consumption are two optimization goals that change co-directionally, it is difficult to find a business collaboration computing mechanism that minimizes both simultaneously. This paper takes the unmanned aerial vehicle (UAV) inspection business scenario in the edge-computing-based electric power IoT as a representative case and proposes a two-stage business collaboration computing mechanism, comprising resource allocation and task allocation, to optimize the business delay and the energy consumption of the UAV by decoupling the complex correlation between resource allocation and task allocation. Firstly, a steepest descent (SD) resource allocation algorithm is proposed. Secondly, an improved multiobjective evolutionary algorithm based on decomposition (MOEA/D-IM), which dynamically adjusts the crossover distribution index and the neighborhood size, is proposed as the task allocation algorithm to minimize business delay and energy consumption on the basis of the resource allocation. Simulation results show that our algorithms reduce the business delay and energy consumption by more than 6.4% and 9.5%, respectively, compared with other algorithms.
Coupling enhancement of THz metamaterials source with parallel multiple beams
ZHANG Kaichun, FENG Yuming, ZHAO Xiaoyan, HU Jincheng, XIONG Neng, GUO Sidou, TANG Lin, LIU Diwei
, doi: 10.1049/cje.2022.00.032
Abstract:
In this paper, we propose a terahertz radiation source over the R-band (220-325 GHz) based on a metamaterials (MTMs) structure and parallel multiple beams. The effective permittivity and permeability of the slow-wave structure (SWS) are obtained through the S-parameter retrieval approach using numerical simulation. Additionally, the electromagnetic properties of the MTMs structure are analyzed, including the dispersion and the coupling impedance. Furthermore, we simulate the beam-wave interaction of the backward wave oscillator (BWO) with the MTMs structure and parallel multiple beams by a 3-D particle-in-cell (PIC) code. It is observed that parallel multiple beams can strongly enhance the beam-wave interaction and greatly enlarge the output power. The results indicate that the saturated (peak) output power is approximately 63 W with an efficiency of roughly 6% at the operating frequency of 231 GHz, under a beam voltage of 35 kV and a total current of 30 mA (six beams). Meanwhile, the BWO can generate an output power of 10 W to 80 W over the tunable frequency range of 220 GHz to 240 GHz.
Towards Evaluating the Robustness of Adversarial Attacks Against Image Scaling Transformation
ZHENG Jiamin, ZHANG Yaoyuan, LI Yuanzhang, WU Shangbo, YU Xiao
, doi: 10.1049/cje.2021.00.309
Abstract:
The robustness of adversarial examples to image scaling transformation is usually ignored when most existing adversarial attacks are proposed. In contrast, image scaling is often the first step of a model, transforming input images of various sizes into a fixed size. We evaluate the impact of image scaling on the robustness of adversarial examples in image classification tasks. We set up an image scaling system to provide a basis for the robustness evaluation and conduct experiments in different situations to explore the relationship between image scaling and the robustness of adversarial examples. Experimental results show that various scaling algorithms have a similar impact on the robustness of adversarial examples, but the scaling ratio impacts it significantly.
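A rough sketch of this style of evaluation under assumptions (placeholder model and attack, bilinear resampling only): scale an adversarial image by a chosen ratio, scale it back to the model's fixed input size, and check whether the prediction still differs from the true label.

import torch.nn.functional as F

def prediction_after_scaling(model, adv_image, scale, mode="bilinear"):
    # adv_image: (batch, channels, H, W) adversarial example
    h, w = adv_image.shape[-2:]
    resized = F.interpolate(adv_image, size=(int(h * scale), int(w * scale)),
                            mode=mode, align_corners=False)
    # the classifier expects a fixed input size, so scale back before inference
    restored = F.interpolate(resized, size=(h, w), mode=mode, align_corners=False)
    return model(restored).argmax(dim=1)

# usage idea: an attack "survives" the transformation when the prediction after
# scaling still disagrees with the ground-truth label of the clean image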
An Improved Path Delay Variability Model via Multi-level Fan-out-of-4 Metric for Wide-Voltage-Range Digital CMOS Circuits
CUI Yuqiang, SHAN Weiwei, DAI Wentao, LIU Xinning, GUO Jingjing, CAO Peng
, doi: 10.1049/cje.2021.00.447
Abstract:
In advanced CMOS technology, process, voltage, and temperature (PVT) variations increase path latency in digital circuits, especially when operating at a low supply voltage. The fan-out-of-4 inverter chain (FO4 chain) metric has been proven to be a good metric for estimating path delay variability, whereas previous work ignored the non-independent characteristics of adjacent cells in a path. In this study, an improved model of path delay variability is established to describe the relationship between the paths' max-delay variability and an FO4 chain; it is based on a multi-level FO4 metric and circuit-level parameter knobs (i.e., cell topology and driving strength) of the first few cells. We take the slew and load into account to improve the accuracy of this framework. Examples of 28 nm and 40 nm digital circuits show that our model conforms with Monte Carlo simulations as well as measurements of fabricated chips. It can model the delay variability effectively to speed up the design process with limited accuracy loss. It also provides a deeper understanding and quick estimation of path delay variability from near-threshold to nominal voltages.
Linguistic Steganalysis via Fusing Multi-granularity Attentional Text Features
WEN Juan, DENG Yaqian, PENG Wanli, XUE Yiming
, doi: 10.1049/cje.2022.00.009
Abstract:
Deep-learning-based language models have improved generation-based linguistic steganography, posing a huge challenge for linguistic steganalysis. Existing neural-network-based linguistic steganalysis methods are incompetent at dealing with complicated text because they only extract single-granularity features such as global or local text features. To fuse multi-granularity text features, we present a novel linguistic steganalysis method based on attentional LSTMs and short-cut dense CNNs (BiLSTM-SDC). The BiLSTM, equipped with a scaled dot-product attention mechanism, is used to capture long-dependency representations of the input sentence. The CNN, with short-cut and dense connections, is exploited to extract sufficient local semantic features from the word embedding matrix. We connect the two structures in parallel, concatenate the long-dependency representations and the local semantic features, and classify stego and cover texts. The results of comparative experiments demonstrate that the proposed method is superior to previous state-of-the-art linguistic steganalysis methods.
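A stripped-down sketch of the parallel two-branch idea (not BiLSTM-SDC itself): an attention-free BiLSTM branch and a single plain convolution stand in for the attentional BiLSTM and the short-cut dense CNN, and all sizes are assumptions.

import torch
import torch.nn as nn

class TwoBranchTextClassifier(nn.Module):
    def __init__(self, vocab=10000, emb=128, hidden=64, classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.bilstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.conv = nn.Conv1d(emb, hidden, kernel_size=3, padding=1)
        self.fc = nn.Linear(2 * hidden + hidden, classes)   # concatenated branches

    def forward(self, tokens):                        # tokens: (batch, seq_len)
        e = self.embed(tokens)
        seq_out, _ = self.bilstm(e)
        global_feat = seq_out.mean(dim=1)             # long-dependency summary
        local_feat = torch.relu(self.conv(e.transpose(1, 2))).max(dim=2).values
        return self.fc(torch.cat([global_feat, local_feat], dim=1))

logits = TwoBranchTextClassifier()(torch.randint(0, 10000, (4, 30)))
print(logits.shape)                                   # torch.Size([4, 2])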
Frame Synchronization Method Based on Association Rules for CNAV-2 Messages
LI Xinhao, MA Tao, QIAN Qishu
, doi: 10.1049/cje.2021.00.148
Abstract:
The GPS system is a navigation satellite system with high precision, all-weather service, and global coverage, whose main purpose is to provide real-time and continuous global navigation services for the US military, and interfering with its signal in wartime would be a heavy blow to the US military. Existing interference measures are classified into two types, blanket jamming and deception jamming, with the latter having better interference effects due to its imperceptibility. Frame synchronization, as the foundation of deception jamming, is a focus of current research on navigation countermeasures. This paper discusses the frame synchronization of CNAV-2 messages in GPS L1C signals and proposes a frame synchronization algorithm based on association rules. It analyzes the structural characteristics of CNAV-2 message data, reveals the hidden mapping relationships in the BCH code sequence of the first sub-frame by applying association rules, and achieves blind synchronization of navigation messages by counting the types of mapping relationships and calculating the confidence levels. The simulation results show that the proposed algorithm displays high error resilience and correct recognition rates and demonstrates value in engineering applications.
Infrared and visible image fusion based on blur suppression generative adversarial network
YI Shi, LI Xi, LI Li, CHENG Xinghao, WANG Cheng
, doi: 10.1049/cje.2021.00.084
Abstract:
The key to multi-sensor image fusion is the fusion of infrared and visible images. Fusing infrared and visible images with a generative adversarial network (GAN) has great advantages in automatic feature extraction and subjective visual improvement. However, due to the different imaging principles of infrared and visible sensors, edge and texture blur appears in the fusion results of GANs. For this purpose, this paper proposes a novel generative adversarial network with blur suppression. Specifically, the generator uses the residual-in-residual dense block with a switchable normalization layer (RRDB+SN) as the elemental network block to retain the infrared intensity and the textural details of the fused image and avoid fusion artifacts. Furthermore, we design an anti-blur loss function based on the Weber local descriptor (WLD). Finally, numerous experiments are performed qualitatively and quantitatively on public datasets. The results justify that the proposed method produces fused images with sharp edges and clear textures.
Remote Data Auditing for Cloud-Assisted WBANs with Pay-as-you-go Business Model
LI Yumei, ZHANG Futai
, doi: 10.1049/cje.2020.00.314
Abstract:
As an emerging technology, cloud-assisted wireless body area networks (WBANs) provide more convenient services to users. Recently, many remote data auditing (RDA) protocols have been proposed to ensure data integrity and authenticity when data owners outsource their data to the cloud. However, most of them cannot check data integrity periodically according to the pay-as-you-go business model. These protocols also incur high tag generation computation overhead, which places a heavy burden on data owners. Therefore, we construct a lightweight remote data auditing protocol to overcome all of the above drawbacks. Our work can be deployed in a public environment without secret channels. It makes use of certificate-based cryptography, which avoids certificate management problems, key escrow problems, and secret channels. The security analysis illustrates that the proposed protocol is secure. Moreover, the performance evaluation implies that our work is effective in cutting down computation and communication overheads.
Design and Implementation of a Novel Self-bias S-band Broadband GaN Power Amplifier
ZHANG Luchuan, ZHONG Shichang, CHEN Yue
, doi: 10.1049/cje.2021.00.118
Abstract:
In this paper, a 3.6 mm gate-width GaN HEMT in a 0.35 μm gate-length process and input and output matching circuits from the Nanjing Electronic Devices Institute are used for the broadband design, and a novel high-power, high-efficiency self-bias S-band broadband continuous-wave GaN power amplifier is realized. Under working conditions of 2.2 GHz to 2.6 GHz and a 32 V drain supply, the continuous-wave output power of the amplifier is more than 20 W, the power gain is more than 15 dB, and the maximum power-added efficiency is more than 65%. The self-bias amplifier simplifies the circuit structure and achieves excellent circuit performance.
Cross Modal Adaptive Few-Shot Learning Based on Task Dependence
DAI Leichao, FENG Lin, SHANG Xinglin, SU Han
, doi: 10.1049/cje.2021.00.093
Abstract:
Few-shot learning (FSL) is a new machine learning method that applies prior knowledge from tasks in different domains. Existing metric-based FSL models have some drawbacks: the extracted features cannot reflect the true data distribution, and their generalization ability is weak. To address these problems, we develop a model named COOPERATE (CrOss mOdal adaPtive fEw-shot leaRning bAsed on Task dEpendence). A feature extraction and task representation method based on a task condition network and auxiliary co-training is proposed. A semantic representation is added to each task by combining both visual and textual features. The measurement scale is adjusted to change the parameter update property of the algorithm. Experimental results show that COOPERATE outperforms both single-modal and modality-alignment FSL approaches.
Design of Pyramidal Horn with Arbitrary E\H Plane Half-Power Beamwidth
ZHANG Wenrui, SHAO Wenyuan, JI Yicai, LI Chao, YANG Guan, LU Wei, FANG Guangyou
, doi: 10.1049/cje.2021.00.212
Abstract:
This paper proposes a novel design method for pyramidal horns under 3 dB beamwidth constraints. It is based on the general radiation patterns of the E/H planes derived from Huygens' principle. Through interpolation and fitting techniques, the maximum aperture error parameter of the pyramidal horn in the E/H plane is obtained as a function of the angle and the aperture electrical size. Firstly, the aperture size of the E (or H) plane is calculated with the help of the optimal gain principle. Secondly, the constraint equation of the other plane is derived. Finally, the intersection of the constraint equation and the interpolation function, which can be solved iteratively, contains all the solution information. The general radiation patterns neglect the influence of the Huygens element factor, which makes the error larger for large design beamwidths. In this paper, through theoretical analysis and simulation experiments, two correction formulas are employed to correct the Huygens element factor's influence on the E/H planes. Simulation experiments and measurements show that the proposed method has a smaller design error over the 0-60 degree half-power beamwidth range.
A Low Complexity Distributed Multitarget Detection and Tracking Algorithm
FAN Jiande, XIE Weixin, LIU Zongxiang
, doi: 10.1049/cje.2021.00.282
Abstract:
In this paper, we propose a low-complexity distributed approach to address the multitarget detection/tracking problem in the presence of noisy and missing data. The proposed approach consists of two components: a distributed flooding scheme for exchanging measurements among sensors, and a sampling-based clustering approach for target detection/tracking from the aggregated measurements. The main advantage of the proposed approach over the prevailing Markov-Bayes-based distributed filters is that it does not require any a priori information; all the information required is the measurement set from multiple sensors. A comparison of the proposed approach with available distributed clustering approaches and cutting-edge distributed multi-Bernoulli filters modeled with appropriate parameters confirms the effectiveness and reliability of the proposed approach.
Intelligent Orchestrating of IoT Microservices Based on Reinforcement Learning
WU Yuqin, SHEN Congqi, CHEN Shuhan, WU Chunming, LI Shunbin, Wei Ruan
, doi: 10.1049/cje.2020.00.417
Abstract:
With the recent increase in the number of Internet of Things (IoT) services, an intelligent scheduling strategy is needed to manage these services. In this paper, the problem of automatic choreography of microservices in the IoT is explored. A type of reinforcement learning (RL) algorithm called TD3 is used to generate the optimal choreography policy under the framework of a software-defined network. The optimal policy is gradually reached during the learning procedure to achieve the goal, despite the dynamic characteristics of the network environment. The simulation results show that, compared with other methods, the TD3 algorithm converges faster after a certain number of iterations and performs better than other non-RL algorithms by obtaining the highest reward. The TD3 algorithm can efficiently adjust the traffic transmission path and provide qualified IoT services.
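For readers unfamiliar with TD3, the sketch below shows the standard clipped double-Q target with target-policy smoothing used when training the twin critics; it is generic TD3 rather than the paper's SDN-specific state/action design, and the hyperparameters are common defaults rather than the authors' settings.

import torch

def td3_critic_target(reward, next_state, done, actor_tgt, critic1_tgt, critic2_tgt,
                      gamma=0.99, noise_std=0.2, noise_clip=0.5, act_limit=1.0):
    # target policy smoothing: add clipped noise to the target actor's action
    action = actor_tgt(next_state)
    noise = (torch.randn_like(action) * noise_std).clamp(-noise_clip, noise_clip)
    next_action = (action + noise).clamp(-act_limit, act_limit)
    # clipped double-Q: take the minimum of the two target critics
    q_next = torch.min(critic1_tgt(next_state, next_action),
                       critic2_tgt(next_state, next_action))
    return reward + gamma * (1.0 - done) * q_next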
Recursive Feature Elimination Based Feature Selection in Modulation Classification for MIMO Systems
ZHOU Shuai, LI Tao, LI Yongzhao
, doi: 10.1049/cje.2021.00.347
Abstract:
Feature-based (FB) algorithms are widely used in modulation classification due to their low complexity. As a prerequisite step of FB algorithms, feature selection can reduce the computational complexity without significant performance loss. In this paper, exploiting the linear separability of cumulant features, the hyperplane of a support vector machine is used to classify modulation types, and the contribution of different features is ranked through the weight vector. Then, cumulant features are selected using recursive feature elimination (RFE) to identify the modulation type employed at the transmitter. We compare the performance of the proposed algorithm with existing feature selection algorithms and analyze the complexity of all the mentioned algorithms. Simulation results verify that the proposed RFE algorithm can optimize the selection of features to realize modulation recognition and improve identification efficiency.
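As a small illustration of SVM-weight-driven RFE (with synthetic data standing in for the higher-order cumulant features, and the feature counts chosen arbitrarily), scikit-learn's RFE wrapper can rank and prune features using the linear SVM's weight vector:

from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import LinearSVC

# synthetic stand-in for cumulant features extracted from received symbols
X, y = make_classification(n_samples=500, n_features=9, n_informative=5,
                           random_state=0)
selector = RFE(LinearSVC(C=1.0, dual=False, max_iter=5000),
               n_features_to_select=4)       # keep the 4 highest-weight features
selector.fit(X, y)
print(selector.support_)                     # boolean mask of the retained features
print(selector.ranking_)                     # 1 = selected, larger = pruned earlier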
Non-uniform Compressive Sensing Imaging based on Image Saliency
LI Hongliang, DAI Feng, ZHAO Qiang, MA Yike, CAO Juan, ZHANG Yongdong
, doi: 10.1049/cje.2019.00.028
Abstract:
For more effective image sampling, compressive sensing (CS) imaging methods based on image saliency have been proposed in recent years. These methods assign higher measurement rates to salient regions and lower measurement rates to non-salient regions to improve the performance of CS imaging. However, these methods are block-based, which is difficult to apply to actual CS sampling, as each photodiode must strictly correspond to a block of the scene. In our work, we propose a non-uniform CS imaging method based on image saliency, which assigns a higher measurement density to salient regions and a lower density to non-salient regions, where measurement density is the number of pixels measured in a unit size. As the dimension of the signal is reduced, the quality of the reconstructed image improves theoretically, which is confirmed by our experiments. Since the scene is sampled as a whole, our method can be easily applied to actual CS sampling. To verify the feasibility of our approach, we design and implement a hardware sampling system that applies our non-uniform sampling method to obtain measurements and reconstruct images. To the best of our knowledge, this is the first CS hardware sampling system based on image saliency.
Hyperspectral Image Super-Resolution Based on Spatial-Spectral Feature Extraction Network
LI Yanshan, CHEN Shifu, LUO Wenhan, ZHOU Li, XIE Weixin
, doi: 10.1049/cje.2021.00.081
Abstract:
Constrained by physics, the spatial resolution of hyperspectral images (HSI) is low. Hyperspectral image super-resolution (HSI SR) is the task of obtaining high-resolution hyperspectral images (HR HSI) from low-resolution hyperspectral images (LR HSI). Existing algorithms tend to lose important spectral information while improving the spatial resolution. To handle this problem, a spatial-spectral feature extraction network (SSFEN) for HSI SR is proposed in this paper. It enhances the spatial resolution of the HSI while preserving the spectral information. The SSFEN is composed of three parts: a spatial-spectral mapping network (SSMN), a spatial reconstruction network (SRN), and a spatial-spectral fusing network (SSFN). A joint loss function with spatial and spectral constraints is designed to guide the training of the SSFEN. Experimental results show that the proposed method improves the spatial resolution of the HSI and effectively preserves the spectral information at the same time.
Developer Cooperation Relationship and Attribute Similarity Based Community Detection in Software Ecosystem
SHEN Xin, DU Junwei, GONG Dunwei, YAO Xiangjuan
, doi: 10.1049/cje.2021.00.276
摘要:
A software ecosystem (SECO) can be described as a special complex network. Previous complex networks in an SECO have limitations in accurately reflecting the similarity between each pair of nodes. The community structure is critical for understanding the network topology and function. Many scholars adopt evolutionary optimization methods for community detection. However, the information used in previous optimization models for community detection is incomplete, and these models cannot be directly applied to community detection in an SECO. Based on this, a complex network for SECOs is first built. In the network, the cooperation intensity between developers is accurately calculated, and the attributes of each developer are taken into account. A multi-objective optimization model is then formulated, and a community detection algorithm based on NSGA-II is employed to solve it. Experimental results demonstrate the advantages of the proposed method for calculating developer cooperation intensity and of our model.
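Purely as an illustration of how cooperation intensity and attribute similarity can be fused into a single edge weight (the paper's actual formulas are not reproduced here; the normalization and the mixing coefficient `alpha` are assumptions):

```python
# Toy edge weight combining developer cooperation intensity and attribute similarity.
import numpy as np

def cooperation_intensity(shared_commits, total_i, total_j):
    """Toy intensity: shared activity normalized by the smaller workload."""
    return shared_commits / max(1, min(total_i, total_j))

def attribute_similarity(attr_i, attr_j, eps=1e-12):
    """Cosine similarity between developer attribute vectors."""
    a, b = np.asarray(attr_i, float), np.asarray(attr_j, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def edge_weight(shared, total_i, total_j, attr_i, attr_j, alpha=0.6):
    """Convex combination of cooperation intensity and attribute similarity."""
    return alpha * cooperation_intensity(shared, total_i, total_j) \
         + (1 - alpha) * attribute_similarity(attr_i, attr_j)

if __name__ == "__main__":
    print(edge_weight(shared=12, total_i=40, total_j=25,
                      attr_i=[1, 0, 3, 2], attr_j=[2, 0, 2, 1]))
```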
Convolutional Neural Networks of Whole Jujube Fruits Prediction Model Based on Multi-Spectral Imaging Method
WANG Jing, FAN Xiaofei, SHI Nan, ZHAO Zhihui, SUN Lei, SUO Xuesong
, doi: 10.1049/cje.2021.00.149
摘要:
Soluble sugar is an important index of jujube quality and an important factor influencing the taste of jujube. Measuring the soluble sugar content of jujube mainly relies on manual chemical analysis, which is time-consuming and labor-intensive. In this study, the feasibility of multi-spectral imaging combined with deep learning for rapid nondestructive testing of internal fruit quality was analyzed. A support vector machine regression model, a partial least squares regression model, and a convolutional neural network (CNN) model were established with the multi-spectral imaging method to predict the soluble sugar content of the whole jujube fruit, and the optimal model was selected to predict the content of three kinds of soluble sugar. The study showed that the sucrose prediction model for the whole jujube performed best after CNN training, with a correlation coefficient of 0.88 on the validation set, which demonstrates the feasibility of using CNNs to predict the soluble sugar content of jujube fruits.
A Note for Estimation About Average Differential Entropy of Continuous Bounded Space-Time Random Field
SONG Zhanjie, ZHANG Jiaxing
, doi: 10.1049/cje.2021.00.213
摘要:
In this paper, we mainly study the discrete approximation of the average differential entropy of a continuous bounded space-time random field. The estimation of the differential entropy of a random variable is a classic problem with many related studies. A space-time random field extends a random variable by indexing it with space-time parameters, but studies on discrete estimation of the entropy of a space-time random field are relatively few. The differential entropy forms of a continuous bounded space-time random field and their discrete estimations are discussed, and three estimation forms of differential entropy generalized from the random-variable case are given in this paper. Furthermore, it is concluded that, under the condition that the entropy estimation formula after space-time partitioning converges with probability 1, the average entropy over the bounded space-time region also converges with probability 1, and the three generalized entropies are verified respectively. In addition, we carried out numerical experiments on the convergence of the average entropy estimation with respect to the parameters, and the numerical results are consistent with the theoretical results, which indicates that further study of the average entropy estimation problem for space-time random fields will be worthwhile.
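For readers unfamiliar with discrete approximations of differential entropy, the sketch below shows the simplest plug-in (histogram) estimator for a bounded one-dimensional sample. It only illustrates the general idea of estimating differential entropy from partitioned data and is not one of the paper's three estimators.

```python
# Histogram (plug-in) estimate of differential entropy on a bounded interval.
import numpy as np

def histogram_entropy(samples, bins=32, value_range=(0.0, 1.0)):
    """H ≈ -sum_k p_k * log(p_k / width), where p_k are bin frequencies."""
    counts, edges = np.histogram(samples, bins=bins, range=value_range)
    width = edges[1] - edges[0]
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p / width)))

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    x = rng.uniform(0.0, 1.0, size=100_000)   # true differential entropy is 0
    print(round(histogram_entropy(x), 4))
```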
A CMOS 4-Element Ku-Band Phased-Array Transceiver
ZHANG Xiaoning, YU Yiming, ZHAO Chenxi, LIU Huihua, WU Yunqiu, KANG Kai
, doi: 10.1049/cje.2021.00.372
摘要:
This paper presents a Ku-band fully differential 4-element phased-array transceiver implemented in a standard 180-nm CMOS process. Each transceiver channel integrates a 5-bit phase shifter and a 4-bit attenuator for high-resolution radiation manipulation. The front-end system operates in time-division mode, and hence two low-loss T/R switches are included in each channel. At room temperature, the measured root-mean-square (RMS) phase error is less than 5.5°. Furthermore, the influence of temperature on the passive switched phase shifters is analyzed, and an extra phase-shifting cell is developed to calibrate the phase error across operating temperatures. With the calibration, the RMS phase error is reduced by 7° at −45 ℃ and by 5.4° at 85 ℃. The RMS amplitude error is less than 0.92 dB at 15~18 GHz. In the RX mode, the measured gain is 9.6±1.1 dB at 16.5 GHz with a noise figure of 10.9 dB and an input P1dB of −15 dBm, while the single channel's gain and output P1dB in the TX mode are 11.3±0.4 dB and 9.4 dBm at 16.1 GHz, respectively. The whole chip occupies an area of 5 × 4.2 mm2, and the measured isolation between any two adjacent channels is below −23.1 dB.
Dual Radial-Resonant Wide Beamwidth Circular Sector Microstrip Patch Antennas
MAO Xiaohui, LU Wenjun, JI Feiyan, XING Xiuqiong, ZHU Lei
, doi: 10.1049/cje.2021.00.219
摘要:
In this article, a design approach to radial-resonant wide beamwidth circular sector patch antennas is advanced. Properly evolved from a U-shaped dipole, a prototype magnetic dipole can be fitted in the radial direction of a circular sector patch radiator, with its length set to a positive odd-integer multiple of one-quarter wavelength. In this way, a circular sector patch antenna resonating in multiple TM0m (m = 1, 2, …) modes, with a short-circuited circumference and widened E-plane beamwidth, can be realized by proper excitation and perturbations. Prototype antennas are then designed and fabricated to validate the design approach. Experimental results reveal that the E-plane beamwidth of a dual-resonant antenna fabricated on an air/Teflon substrate can be effectively broadened to 128°/120°, with an impedance bandwidth of 17.4%/7.1%, respectively. In both cases, the antenna height is strictly limited to no more than 0.03 guided wavelengths. It is thus validated that the proposed approach can effectively enhance the operational bandwidth and beamwidth of a microstrip patch antenna while maintaining its inherent low-profile merit.
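As a reading aid only, the quarter-wavelength statement above can be written out as the resonant-length condition below, where λg denotes the guided wavelength and m the mode index; this is a paraphrase of the abstract, not a formula taken from the paper.

```latex
L_m = (2m-1)\,\frac{\lambda_g}{4}, \qquad m = 1, 2, \ldots
```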
Convolution theorem associated with the QWFRFT
MEI Yinyin, FENG Qiang, GAO Xiuxiu, ZHAO Yanbo
, doi: 10.1049/cje.2021.00.225
摘要:
The quaternion windowed fractional Fourier transform (QWFRFT) is a generalized form of the quaternion fractional Fourier transform (QFRFT), which plays an important role in signal processing for the analysis of higher-dimensional signals. In this paper, we first introduce the two-sided QWFRFT and give some of its fundamental properties. Secondly, the quaternion convolution is proposed, and its relationship with the classical convolution is given. Based on the quaternion convolution of the QWFRFT, convolution theorems associated with the QWFRFT are studied. Thirdly, a fast algorithm for the QWFRFT is discussed, and the complexities of the QWFRFT and of the quaternion windowed fractional convolution are given.
Multi-Frequency-Ranging Positioning Algorithm for 5G OFDM Communication Systems
LI Wengang, XU Yaqin, ZHANG Chenmeng, TIAN Yiheng, LIU Mohan, HUANG Jun
, doi: 10.1049/cje.2021.00.124
摘要:
Vehicles equipped with 5th generation (5G) wireless communication devices can exchange information with infrastructure (vehicle to infrastructure, V2I) to improve positioning accuracy. Vehicle localization has great research value because of multipath environments and the lack of Global Navigation Satellite System (GNSS) signals. This paper proposes a multi-frequency ranging method and positioning algorithm for 5G orthogonal frequency division multiplexing (OFDM) communication systems. It selects specific subcarriers of the OFDM system for transmitting ranging frames and obtaining delay observations without affecting the other subcarriers used for communication; thus, with almost no impact on communication capacity, several specific OFDM subcarriers are used for ranging and positioning. The paper introduces the ranging subcarrier selection method and the format of the ranging frame carried by these subcarriers, and derives the Cramér-Rao lower bound (CRLB) of the ranging and positioning system. The ranging and positioning accuracy meets the requirements of vehicle localization applications. Simulations compare the performance with other positioning methods and demonstrate the superiority of the system. The relationship between ranging accuracy and channel parameters in a multipath environment is proved theoretically and verified by simulation. The simulation results show that a positioning accuracy of about 5 cm can be achieved at a 5 GHz carrier frequency and high signal-to-noise ratio (SNR).
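The toy sketch below shows one basic ingredient of subcarrier-based ranging: a propagation delay makes the received phase vary linearly with subcarrier frequency, so a least-squares fit of phase against frequency recovers the delay and hence the range. The subcarrier spacing, noise level, and estimator are illustrative assumptions, not the paper's frame format or algorithm.

```python
# Range from the phase slope across a set of OFDM subcarriers (single-path, toy model).
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def range_from_phases(freqs_hz, phases_rad):
    """Fit phase ≈ -2*pi*(f - f0)*tau + const and return the distance c*tau."""
    df = freqs_hz - freqs_hz[0]
    slope = np.polyfit(df, np.unwrap(phases_rad), 1)[0]
    return -C * slope / (2 * np.pi)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    distance = 42.0                                   # metres
    tau = distance / C
    freqs = 5.0e9 + 240e3 * np.arange(64)             # a few widely spaced subcarriers
    true_phase = -2 * np.pi * freqs * tau
    measured = np.angle(np.exp(1j * true_phase)) + 0.01 * rng.standard_normal(freqs.size)
    print(round(range_from_phases(freqs, measured), 2))   # ≈ 42.0
```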
NGD Analysis of Defected Ground and SIW-Matched Structure
GU Taochen, WAN Fayu, GE Junxiang, Lalléchère Sébastien, Rahajandraibe Wenceslas, Ravelo Blaise
, doi: 10.1049/cje.2021.00.233
摘要:
An innovative design of a bandpass (BP) negative group delay (NGD) passive circuit based on a defected ground structure (DGS) is developed in the present paper. The NGD DGS topology is originally built with notched cells associated with self-matched substrate integrated waveguide elements. The DGS design method is introduced as a function of the geometry of the notched and substrate integrated waveguide via elements. Then, parametric analyses based on full-wave 3-D electromagnetic S-parameter simulations are carried out to investigate the influence of the DGS physical dimensions. The feasibility of the design method is validated with a fully distributed microstrip circuit prototype. Significant BP NGD performance is confirmed by 3-D simulations and measurements, with an NGD value of −1.69 ns around a 2 GHz center frequency over a 33.7 MHz NGD bandwidth, an insertion loss better than 4 dB, and a reflection loss better than 40 dB.
AttentionSplice: An interpretable multi-head self-attention based hybrid deep learning model in splice site prediction
YAN Wenjing, ZHANG Baoyu, ZUO Min, ZHANG Qingchuan, WANG Hong, DA Mao
, doi: 10.1049/cje.2021.00.221
摘要:
Pre-mRNA splicing is an essential step in gene transcription. Through the cutting of introns and exons, the DNA sequence can be decoded into different proteins with different biological functions. The cutting boundaries are defined by the donor and acceptor splice sites. Characterizing the nucleotide patterns for detecting splice sites is sophisticated and challenges conventional methods. Recently, deep learning frameworks have been introduced for predicting splice sites and exhibit high performance. They extract high-dimensional features from the DNA sequence automatically rather than inferring splice sites from prior knowledge of the relationships, dependencies, and characteristics of nucleotides in the DNA sequence. This paper proposes the AttentionSplice model, a hybrid construction combining multi-head self-attention, a convolutional neural network (CNN), and a bidirectional long short-term memory (Bi-LSTM) network. The performance of AttentionSplice is evaluated on the Homo sapiens (human) and Caenorhabditis elegans (worm) datasets. Our model outperforms state-of-the-art models in the classification of splice sites. To provide interpretability of the AttentionSplice model, we extract important positions and key motifs that could be essential for splice site detection from the attention learned by the model. Our results could offer novel insights into the underlying biological roles and molecular mechanisms of gene expression.
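The PyTorch sketch below shows the general shape of such a hybrid encoder (CNN, then Bi-LSTM, then multi-head self-attention over a one-hot DNA window); all layer sizes, the window length, and the pooling are illustrative assumptions and do not reproduce the AttentionSplice configuration.

```python
# Schematic hybrid CNN + Bi-LSTM + multi-head self-attention classifier for DNA windows.
import torch
import torch.nn as nn

class HybridSpliceNet(nn.Module):
    def __init__(self, seq_len=200, channels=64, lstm_hidden=64, heads=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(4, channels, kernel_size=9, padding=4),   # 4 = one-hot A, C, G, T
            nn.ReLU(),
        )
        self.bilstm = nn.LSTM(channels, lstm_hidden, batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(embed_dim=2 * lstm_hidden,
                                          num_heads=heads, batch_first=True)
        self.head = nn.Linear(2 * lstm_hidden, 2)                # splice / non-splice

    def forward(self, x):                       # x: (batch, 4, seq_len), one-hot
        h = self.conv(x).transpose(1, 2)        # (batch, seq_len, channels)
        h, _ = self.bilstm(h)                   # (batch, seq_len, 2*hidden)
        h, weights = self.attn(h, h, h)         # attention weights can be inspected
        return self.head(h.mean(dim=1)), weights

if __name__ == "__main__":
    model = HybridSpliceNet()
    logits, attn = model(torch.randn(2, 4, 200))
    print(logits.shape, attn.shape)
```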
A Novel Re-weighted CTC Loss for Data Imbalance in Speech Keyword Spotting
LAN Xiaotian, HE Qianhua, YAN Haikang, LI Yanxiong
, doi: 10.1049/cje.2021.00.198
摘要:
The speech keyword spotting system is a critical component of human-computer interfaces, and connectionist temporal classification (CTC) has been proven to be an effective tool for that task. However, the standard training process of speech keyword spotting faces a data imbalance issue where positive samples are usually far fewer than negative samples. Numerous easy-to-train negative examples overwhelm the training, resulting in a degenerated model. To deal with this, this paper reshapes the standard CTC loss and proposes a novel re-weighted CTC loss. It evaluates the importance of each sample by its number of detection errors during training and automatically down-weights the contribution of easy examples, the majority of which are negatives, making the training focus on samples deserving more attention. The proposed method alleviates the imbalance naturally and makes use of all available data efficiently. Evaluation on several sets of keywords selected from AISHELL-1 and AISHELL-2 achieves 16% to 38% relative reductions in false rejection rates over the standard CTC loss at 0.5 false alarms per keyword per hour.
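A minimal sketch of per-utterance re-weighting on top of the standard CTC loss is shown below: the unreduced CTC loss is computed per sample and then scaled by a weight vector. How the weights are derived from detection errors is the paper's contribution and is not reproduced; here the weights are simply supplied by the caller.

```python
# Per-sample re-weighting of the standard CTC loss (weights supplied externally).
import torch
import torch.nn.functional as F

def reweighted_ctc_loss(log_probs, targets, input_lengths, target_lengths,
                        sample_weights, blank=0):
    """log_probs: (T, N, C) log-softmax outputs; sample_weights: (N,)."""
    per_sample = F.ctc_loss(log_probs, targets, input_lengths, target_lengths,
                            blank=blank, reduction="none", zero_infinity=True)
    return (sample_weights * per_sample).mean()

if __name__ == "__main__":
    T, N, C, S = 50, 4, 28, 10
    log_probs = torch.randn(T, N, C).log_softmax(-1)
    targets = torch.randint(1, C, (N, S))
    input_lengths = torch.full((N,), T, dtype=torch.long)
    target_lengths = torch.full((N,), S, dtype=torch.long)
    weights = torch.tensor([1.0, 0.2, 0.2, 1.0])   # e.g. down-weight easy negatives
    print(reweighted_ctc_loss(log_probs, targets, input_lengths, target_lengths, weights))
```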
Explainable Business Process Remaining Time Prediction using Reachability Graph
CAO Rui, ZENG Qingtian, NI Weijian, LU Faming, LIU Cong, DUAN Hua
, doi: 10.1049/cje.2021.00.170
摘要:
With recent advances in deep learning, an increasing number of deep neural networks have been applied to business process prediction tasks such as remaining time prediction to obtain more accurate predictive results. However, existing deep-learning-based time prediction methods have poor interpretability. Hence, an explainable business process remaining time prediction method using the reachability graph is proposed, which consists of prediction model construction and visualization. For prediction model construction, a Petri net is mined and its reachability graph is constructed to obtain the transition occurrence vectors. Then, prefixes and their corresponding suffixes are generated and clustered into different transition partitions according to the transition occurrence vectors. Next, a bidirectional recurrent neural network with attention is applied to each transition partition to encode the (trace) prefixes, and deep transfer learning between different transition partitions is performed. For the visualization of the prediction models, evaluation values are attached to the sub-processes of the Petri net to realize the visualization of the prediction models. Finally, the proposed method is validated on publicly available event logs.
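For readers unfamiliar with reachability graphs, the sketch below builds one for a small, bounded place/transition net by breadth-first search over markings. The toy net is an arbitrary example invented for illustration, not a model mined from an event log, and the paper's transition occurrence vectors are not computed here.

```python
# Reachability graph of a small bounded Petri net via BFS over markings.
from collections import deque

def reachability_graph(pre, post, m0):
    """pre/post: {transition: {place: weight}}; m0: initial marking as a dict."""
    def fire(marking, t):
        if any(marking.get(p, 0) < w for p, w in pre[t].items()):
            return None                                    # transition not enabled
        m = dict(marking)
        for p, w in pre[t].items():
            m[p] -= w
        for p, w in post[t].items():
            m[p] = m.get(p, 0) + w
        return m

    start = tuple(sorted(m0.items()))
    nodes, edges, queue = {start}, [], deque([m0])
    while queue:
        m = queue.popleft()
        for t in pre:
            m2 = fire(m, t)
            if m2 is None:
                continue
            key = tuple(sorted(m2.items()))
            edges.append((tuple(sorted(m.items())), t, key))
            if key not in nodes:
                nodes.add(key)
                queue.append(m2)
    return nodes, edges

if __name__ == "__main__":
    pre = {"a": {"p1": 1}, "b": {"p2": 1}}
    post = {"a": {"p2": 1}, "b": {"p3": 1}}
    nodes, edges = reachability_graph(pre, post, {"p1": 1})
    print(len(nodes), "markings,", len(edges), "edges")
```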
A Novel Sampling Method Based on Neighborhood Weighted for Imbalanced Datasets
GUANG Mingjian, YAN Chungang, LIU Guanjun, WANG Junli, JIANG Changjun
, doi: 10.1049/cje.2021.00.121
摘要:
Weighted sampling methods based on k-nearest neighbors have been demonstrated to be effective in solving the class imbalance problem. However, they usually ignore the positional relationship between a sample and the heterogeneous samples in its neighborhood when calculating the sample weight. This paper proposes a novel neighborhood-weighted based Bagging (NWBBagging) sampling method to improve the performance of the Bagging algorithm on imbalanced datasets. It considers the positional relationship between the center sample and the heterogeneous samples in its neighborhood when identifying critical samples. A parameter reduction method is also proposed and combined into the ensemble learning framework, which reduces the number of parameters and increases the classifier's diversity. We compare NWBBagging with some state-of-the-art ensemble learning algorithms on 34 imbalanced datasets, and the results show that NWBBagging achieves better performance.
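The sketch below illustrates the general idea of a neighborhood-based weight that uses positional information: each minority sample is weighted by how close the heterogeneous (majority) samples in its k-neighborhood are. The specific weight formula is an assumption for illustration and is not the paper's definition.

```python
# Neighborhood-based sample weights for minority samples (illustrative formula).
import numpy as np

def neighborhood_weights(X, y, minority_label=1, k=5, eps=1e-12):
    X, y = np.asarray(X, float), np.asarray(y)
    idx_min = np.where(y == minority_label)[0]
    weights = np.zeros(len(idx_min))
    for out, i in enumerate(idx_min):
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                                  # exclude the sample itself
        nn = np.argsort(d)[:k]
        hetero = nn[y[nn] != minority_label]           # heterogeneous neighbors
        if hetero.size == 0:
            weights[out] = 0.0                         # safe sample, low weight
        else:
            weights[out] = np.mean(1.0 / (d[hetero] + eps))   # closer enemies, higher weight
    return idx_min, weights / (weights.sum() + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(1.5, 1, (10, 2))])
    y = np.array([0] * 90 + [1] * 10)
    idx, w = neighborhood_weights(X, y)
    print(idx.shape, w.round(3))
```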
MADRL-based 3D Deployment and User Association of Cooperative mmWave Aerial Base Stations for Capacity Enhancement
ZHAO Yikun, ZHOU Fanqin, FENG Lei, LI Wenjing, YU Peng
, doi: 10.1049/cje.2021.00.327
摘要:
Although a millimeter-wave (mmWave) aerial base station (mAeBS) offers rich wireless capacity, it is technically difficult to deploy several mAeBSs to absorb the surge of data traffic in hotspots when considering the interference from neighboring mAeBSs. This paper introduces coordinated multipoint transmission (CoMP) into the mAeBS-assisted network for capacity enhancement and designs a two-timescale approach for the 3D deployment and user association of cooperative mAeBSs. Specifically, an affinity propagation clustering (APC)-based mAeBS-user cooperative association scheme is conducted on the large timescale, followed by modeling of the capacity evaluation, and a deployment algorithm based on multi-agent deep deterministic policy gradient (MADDPG) is designed on the small timescale to obtain the 3D positions of the mAeBSs in a distributed manner. Simulation results demonstrate that the proposed approach has significant throughput gains over conventional schemes without CoMP, and that MADDPG is more efficient than centralized DRL algorithms in deriving the solution.
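As a rough sketch of the large-timescale step only, the snippet below clusters ground users with scikit-learn's affinity propagation and treats each cluster center as a candidate horizontal mAeBS position. The user layout and the one-cluster-one-mAeBS association are assumptions; the paper's cooperative clustering and the MADDPG deployment step are not reproduced.

```python
# APC-based user grouping as a stand-in for the large-timescale association step.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(5)
users = np.vstack([rng.normal(loc, 30, (40, 2)) for loc in ((0, 0), (200, 50), (80, 250))])

ap = AffinityPropagation(damping=0.9, random_state=0).fit(users)
labels = ap.labels_
centres = ap.cluster_centers_          # candidate horizontal mAeBS positions

for c in range(len(centres)):
    print(f"mAeBS {c}: {np.sum(labels == c)} users, centre {centres[c].round(1)}")
```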
A New Edge Perturbation Mechanism for Privacy-Preserving Data Collection in IoT
CHEN Qiuling, YE Ayong, ZHANG Qiang, HUANG Chuan
, doi: 10.1049/cje.2021.00.411
摘要:
A growing amount of data containing users' sensitive information is being collected by emerging smart connected devices and sent to central servers in the Internet of Things (IoT) era, which raises serious privacy concerns for millions of users. However, existing perturbation methods are not effective because of increased disclosure risk and reduced data utility, especially for small data sets. To overcome this issue, we propose a new edge perturbation mechanism based on the concept of global sensitivity to protect sensitive information in IoT data collection. The edge server is used to mask users' sensitive data, which can not only avoid the data leakage caused by centralized perturbation but also achieve better data utility than local perturbation. In addition, we present a global noise generation algorithm based on edge perturbation: each edge server uses the global noise generated by the center server to perturb users' sensitive data. It minimizes the disclosure risk while ensuring that the results of commonly performed statistical analyses are identical for the raw and the perturbed data. Finally, theoretical and experimental evaluations indicate that the proposed mechanism is private and accurate for small data sets.
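The toy snippet below only illustrates one property alluded to above: if the globally generated noise sums to zero across all users, the sample mean of the perturbed data equals the mean of the raw data exactly. This is a didactic illustration, not the proposed mechanism or its noise distribution.

```python
# Zero-sum global noise preserves the sample mean exactly (toy demonstration).
import numpy as np

rng = np.random.default_rng(6)
raw = rng.normal(50, 10, size=200)        # sensitive values from 200 users

noise = rng.normal(0, 5, size=raw.size)
noise -= noise.mean()                     # enforce a zero-sum global noise vector
perturbed = raw + noise

print(raw.mean(), perturbed.mean())
print(np.allclose(raw.mean(), perturbed.mean()))   # True: the mean is unchanged
```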
Differential Analysis of ARX Block Ciphers Based on an Improved Genetic Algorithm
KANG Man, LI Yongqiang, JIAO Lin, WANG Mingsheng
, doi: 10.1049/cje.2021.00.415
摘要:
Differential cryptanalysis is one of the most critical analysis methods for evaluating the security strength of cryptographic algorithms. This paper first applies the genetic algorithm to the search for differential characteristics in differential cryptanalysis. A new algorithm is proposed as the fitness function to generate a high-probability differential characteristic from a given input difference. Based on the differential of the characteristic found by the genetic algorithm, Boolean satisfiability (SAT) is used to search all of its differential characteristics in order to calculate the exact differential probability. In addition, a penalty-like function is proposed to guide the search direction when applying the stochastic algorithm to differential cryptanalysis. Our new automated cryptanalysis method is applied to SPECK32 and SPECK48. As a result, the 10-round differential probability of SPECK32 is improved to $2^{-30.34}$, and a 12-round differential of SPECK48 with differential probability $2^{-46.78}$ is obtained. Furthermore, the corresponding differential attacks are also performed. The experimental results show the validity and outstanding performance of our method in differential cryptanalysis.
Attention Guided Enhancement Network for Weakly Supervised Semantic Segmentation
ZHANG Zhe, WANG Bilin, YU Zhezhou, ZHAO Fengzhi
, doi: 10.1049/cje.2021.00.230
摘要:
Weakly supervised semantic segmentation using only image-level labels is important because it alleviates the need for expensive pixel-level labels. Most cutting-edge methods adopt two-step solutions that learn to produce pseudo ground truth using only image-level labels and then train an off-the-shelf fully supervised semantic segmentation network with these pseudo labels. Although these methods have made significant progress, they also increase the complexity of the model and its training. In this paper, we propose a one-step approach for weakly supervised image semantic segmentation, the attention guided enhancement network (AGEN), which produces pseudo pixel-level labels under the supervision of image-level labels and trains the network to generate segmentation masks in an end-to-end manner. In particular, we employ class activation maps (CAM) produced by different layers of the classification branch to guide the segmentation branch in learning spatial and semantic information. The CAM produced by lower layers can capture the complete object region but with much noise. Thus, a self-attention module is proposed to adaptively enhance object regions and suppress irrelevant regions, further boosting the segmentation performance. Experiments on the Pascal VOC 2012 dataset show that AGEN outperforms other state-of-the-art weakly supervised semantic segmentation methods using only image-level labels.
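For reference, the snippet below computes a plain class activation map by projecting final-layer feature maps onto the classifier weights of a target class; the tensor shapes and backbone are placeholders, and AGEN's multi-layer guidance and self-attention module are not reproduced.

```python
# Plain CAM: weight the last conv feature maps by the target class's classifier weights.
import torch
import torch.nn.functional as F

def class_activation_map(features, fc_weight, class_idx):
    """features: (C, H, W) from the final conv layer; fc_weight: (num_classes, C)."""
    cam = torch.einsum("c,chw->hw", fc_weight[class_idx], features)
    cam = F.relu(cam)
    cam = cam - cam.min()
    return cam / (cam.max() + 1e-8)          # normalized to [0, 1]

if __name__ == "__main__":
    feats = torch.randn(512, 14, 14)
    w = torch.randn(20, 512)                  # e.g. 20 Pascal VOC foreground classes
    cam = class_activation_map(feats, w, class_idx=3)
    print(cam.shape, float(cam.min()), float(cam.max()))
```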
Vector Memory-Access Shuffle Fused Instructions for FFT-like Algorithms
LIU Sheng, YUAN Bo, GUO Yang, SUN Haiyan, JIANG Zekun
, doi: 10.1049/cje.2021.00.401
摘要:
Shuffle operations are the bottleneck when mapping FFT-like algorithms to vector SIMD architectures. We propose six (three pairs of) innovative vector memory-access shuffle fused instructions, which have been proved mathematically. Together with the proposed modified binary-exchange method, the new instructions efficiently address the bottleneck for DIF/DIT radix-2/4 FFT-like algorithms, achieving a performance improvement of 17.9%~111.2% and reducing the code size by 5.4%~39.8%. Besides, the proposed instructions fit some hybrid-radix FFTs and suit the initial or result data placement of general algorithms. The software and hardware cost of the proposed instructions is moderate.
Cryptanalysis of Full-Round Magpie Block Cipher
YANG Yunxiao, SUN Bing, LIU Guoqiang
, doi: 10.1049/cje.2021.00.209
摘要:
${\textsf{Magpie}}$ is a lightweight block cipher proposed by Li et al. in Acta Electronica Sinica in 2017. It adopts an SPN structure with a block size of 64 bits and a key size of 96 bits. To achieve consistency of encryption and decryption, which is both hardware and software friendly, 16 bits of the key are used as control signals to select S-boxes and another 16 bits of the key are used to determine the order of the operations. As the designers claimed, the security might be improved since different keys generate different ciphers. This paper analyzes the security of ${\textsf{Magpie}}$, studies its difference propagation, and finds that the cipher has a set of $ 2^{80} $ weak keys that make the full-round encryption weak; it also corrects the lower bound on the number of active S-boxes to 10 instead of the 25 claimed by the designers. In the weak key model, for this set of $ 2^{80} $ keys, the security of the cipher is reduced to only $ 4\times2^{16} $.
Recover the Secret Components in a ForkCipher
HOU Tao, ZHANG Jiyan, CUI Ting
, doi: 10.1049/cje.2021.00.368
摘要:
Recently, a new cryptographic primitive called a $ \texttt{Forkcipher} $ has been proposed. This paper aims at proposing new generic cryptanalysis against such constructions. We give a generic method to apply existing decompositions against the underlying block cipher ${\cal{{E}}}^r$ to the forking variant $\texttt{Fork}{\cal{E}}$-$(r-1)$-$r_0$-$(r+1-r_0)$. As an application, we consider the security of $ \texttt{ForkSPN} $ and $ \texttt{ForkFN} $ with secret inner functions. We provide a generic attack against $ \texttt{ForkSPN} $-$2$-$r_0$-$(4-r_0)$, which is based on the decomposition of $ \texttt{SASAS} $. We also extend the decomposition of Biryukov et al. against Feistel networks to recover all the unknown round functions in $ \texttt{ForkFN} $-$r$-$r_0$-$r_1$ for $r\leq 6$ and $r_0+r_1\leq 8$. Therefore, compared with the original block cipher, the forking version requires more iteration rounds to resist the recovery attack.
MIMO Radar Transmit-Receive Design for Extended Target Detection against Signal-Dependent Interference
YAO Yu, LI Yanjie, LI Zeqing, WU Lenan, LIU Haitao
, doi: 10.1049/cje.2021.00.140
摘要:
Assuming that the target impulse response (TIR) is unknown, this paper deals with the joint design of the multiple-input multiple-output (MIMO) space-time transmit code (STTC) and space-time receive filter (STRF) for the detection of extended targets in the presence of signal-dependent interference. To enhance the detection performance for extended targets in MIMO radar, we consider a transmit-receive system optimization that maximizes the worst-case signal to interference plus noise ratio (SINR) at the output of the STRF array. The problem is formulated as a non-convex max-min quadratic fractional optimization program. Relying on an appropriate reformulation, we present an alternating optimization technique which monotonically increases the SINR value and converges to a stationary point. Each iteration of the procedure involves both a convex problem and a max-min quadratic fractional programming problem, which is globally solved by resorting to the generalized Dinkelbach procedure with polynomial computational complexity. In addition, resorting to several mathematical manipulations, the original problem is transformed into an equivalent convex problem, which can also be globally solved via interior-point methods. Finally, the effectiveness of the two optimization design procedures is demonstrated through experimental results, underlining the performance enhancement offered by the robust joint design methods.
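To make the Dinkelbach idea concrete, the sketch below runs the classic Dinkelbach iteration on a toy one-dimensional ratio maximization over a finite candidate grid; the paper uses a generalized version inside its max-min SINR design, which is not reproduced here.

```python
# Dinkelbach procedure for maximizing a ratio N(x)/D(x) over a finite candidate set.
import numpy as np

def dinkelbach(numer, denom, candidates, tol=1e-9, max_iter=100):
    """Iteratively solve max_x numer(x) - lam*denom(x); stop when the optimum is ~0."""
    lam = 0.0
    for _ in range(max_iter):
        vals = numer(candidates) - lam * denom(candidates)
        x_star = candidates[np.argmax(vals)]
        f_star = np.max(vals)
        lam_new = numer(x_star) / denom(x_star)   # updated ratio value
        if abs(f_star) < tol:
            break
        lam = lam_new
    return x_star, lam_new

if __name__ == "__main__":
    xs = np.linspace(0.1, 5.0, 5000)
    x_opt, ratio = dinkelbach(lambda x: np.log1p(x), lambda x: 0.5 + x, xs)
    print(round(x_opt, 3), round(ratio, 4))
```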
LBA-ECA Load Balancing Algorithm Based on Weighted Bipartite Graph for Edge Computing
SHAO Sisi, LIU Shangdong, LI Kui, YOU Shuai, QIU Huajie, YAO Xiaoliang, JI Yimu
, doi: 10.1049/cje.2021.00.289
摘要:
Compared with the cloud computing environment, edge computing offers many choices of service providers due to its diverse deployment environments, and this flexibility makes the environment more complex. The current edge computing architecture suffers from scattered computing resources and the limited resources of a single computing node. When an edge node carries too many task requests, the makespan of the tasks is delayed. We propose a load balancing algorithm based on a weighted bipartite graph for edge computing (LBA-EC), which makes full use of network edge resources, reduces user delay, and improves the user service experience. The algorithm schedules tasks in two phases. In the first phase, tasks are matched to different edge servers. In the second phase, tasks are optimally allocated to different containers in the edge server for execution according to two indicators: energy consumption and completion time. The simulations and experimental results show that our algorithm can effectively map all tasks to available resources with a shorter completion time.
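The snippet below sketches the first phase as a weighted bipartite assignment solved with the Hungarian algorithm from SciPy. The random cost matrix stands in for whatever task-to-server weights the paper derives (which combine energy and completion time), and the per-container second phase is omitted.

```python
# Task-to-edge-server matching as a minimum-cost bipartite assignment.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(7)
n_tasks, n_servers = 6, 6
cost = rng.uniform(1.0, 10.0, size=(n_tasks, n_servers))   # hypothetical completion-time costs

rows, cols = linear_sum_assignment(cost)     # one task per server, minimum total cost
for t, s in zip(rows, cols):
    print(f"task {t} -> edge server {s} (cost {cost[t, s]:.2f})")
print("total cost:", round(float(cost[rows, cols].sum()), 2))
```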
Combination for Conflicting Interval-Valued Belief Structures with CSUI-DST Method
LI Shuangming, GUAN Xin, YI Xiao, SUN Guidong
, doi: 10.1049/cje.2021.00.214
摘要:
Since the basic probability of an interval-valued belief structure (IBS) is assigned as an interval number, its combination becomes difficult. In particular, when dealing with highly conflicting IBSs, most existing combination methods may produce counter-intuitive results, bring an extra heavy computational burden due to nonlinear optimization models, and lose the desirable associativity and commutativity of Dempster-Shafer theory (DST). To address these problems, a novel conflicting IBS combination method named the CSUI (conflict, similarity, uncertainty, intuitionistic fuzzy sets)-DST method is proposed by introducing a similarity measure for the degree of conflict among IBSs and an uncertainty measure for the degree of discord, non-specificity, and fuzziness of IBSs. Considering these two measures at the same time, the weight of each IBS is determined according to a modified reliability degree. From the perspective of intuitionistic fuzzy sets, we propose a weighted average IBS combination rule using the addition and number multiplication operators. The effectiveness and rationality of this combination method are validated with two numerical examples and its application to target recognition.
Quantum Attacks on Type-3 Generalized Feistel Scheme and Unbalanced Feistel Scheme with Expanding Functions
ZHANG Zhongya, WU Wenling, SUI Han, WANG Bolin
, doi: 10.1049/cje.2021.00.294
摘要:
Quantum algorithms are raising concerns in the field of cryptography all over the world, and a growing number of symmetric cryptography algorithms have been attacked in the quantum setting. The Type-3 generalized Feistel scheme (GFS) and the unbalanced Feistel scheme with expanding functions (UFS-E) are common symmetric cryptography schemes, which are often used in cryptographic analysis and design. We propose quantum attacks on the two Feistel schemes. For the $ d $-branch Type-3 GFS and UFS-E, we propose distinguishing attacks on the $(d+1)$-round Type-3 GFS and UFS-E in polynomial time in the quantum chosen plaintext attack (qCPA) setting. We also propose key recovery attacks by applying Grover's algorithm and Simon's algorithm. For the $ r $-round $ d $-branch Type-3 GFS with $ k $-bit subkeys, the complexity is $O({2^{(d - 1)(r - d - 1)k/2}})$ for $r\ge d + 2$. The result is better than that based on exhaustive search by a factor of ${2^{({d^2} - 1)k/2}}$. For the $ r $-round $ d $-branch UFS-E, the attack complexity is $O({2^{(r - d - 1)(r - d)k/4}})$ for $d + 2 \le r \le 2d$, and $O({2^{(d - 1)(2r - 3d)k/4}})$ for $r > 2d$. The results are better than those based on exhaustive search by factors of ${2^{(4rd - {d^2} - d - {r^2} - r)k/4}}$ and ${2^{3(d - 1)dk/4}}$ in the quantum setting, respectively.
Learning to Combine Answer Boundary Detection and Answer Re-ranking for Phrase-Indexed Question Answering
WEN Liang, SHI Haibo, ZHANG Xiaodong, SUN Xin, WEI Xiaochi, WANG Junfeng, CHENG Zhicong, YIN Dawei, WANG Xiaolin, LUO Yingwei, WANG Houfeng
, doi: 10.1049/cje.2021.00.079
摘要:
Phrase-indexed question answering (PIQA) seeks to improve the inference speed of question answering (QA) models by enforcing complete independence of the document encoder from the question encoder, and it shows that the constrained model can achieve significant efficiency at the cost of accuracy. In this paper, we aim to build a model under the PIQA constraint while reducing its accuracy gap with unconstrained QA models. We propose a novel framework, AnsDR, which consists of an answer boundary detector (AnsD) and an answer candidate ranker (AnsR). More specifically, AnsD is a QA model under the PIQA architecture designed to identify rough answer boundaries, and AnsR is a lightweight ranking model to finely re-rank the potential candidates without losing efficiency. We perform extensive experiments on public datasets. The experimental results show that the proposed method achieves the state of the art on the PIQA task.
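A structural sketch of the detect-then-re-rank pipeline follows (random vectors stand in for the AnsD and AnsR encoders, which are not specified in enough detail here to reproduce; the point is only that the document side is indexed offline and the question is matched online):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, num_phrases, top_k = 64, 1000, 5

# Offline: encode document phrases independently of any question (PIQA constraint).
phrase_index = rng.normal(size=(num_phrases, dim))      # placeholder phrase encoder

# Online: encode the question and retrieve candidate answer boundaries by dot product.
question_vec = rng.normal(size=dim)                      # placeholder question encoder
scores = phrase_index @ question_vec
candidates = np.argsort(-scores)[:top_k]                 # rough boundaries ("AnsD" role)

# Lightweight re-ranking of the shortlist ("AnsR" role): a tiny bilinear scorer
# with random weights, applied only to top_k items so per-question cost stays small.
W = rng.normal(size=(dim, dim))
rerank_scores = phrase_index[candidates] @ (W @ question_vec)
best = candidates[int(np.argmax(rerank_scores))]
print("top-k candidates:", candidates, "re-ranked best:", best)
```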
Internet of Brain, Thought, Thinking, and Creation
ZHANG Zhimin, YIN Rui, NING Huansheng
, doi: 10.1049/cje.2021.00.236
摘要:
Thinking space came into being with the emergence of human civilization. With the emergence and development of cyberspace, the interaction between those two spaces began to take place. In the collision of thinking and technology, new changes have taken place in both thinking space and cyberspace. To this end, this paper divides the current integration and development of thinking space and cyberspace into three stages, namely Internet of brain (IoB), Internet of thought (IoTh), and Internet of thinking (IoTk). At each stage, the contents and technologies to achieve convergence and connection of spaces are discussed. Besides, the Internet of creation (IoC) is proposed to represent the future development of thinking space and cyberspace. Finally, a series of open issues are raised, and they will become thorny factors in the development of the IoC stage.
Two Jacobi-Like Algorithms for the General Joint Diagonalization Problem with Applications to Blind Source Separation
CHENG Guanghui, MIAO Jifei, LI Wenrui
, doi: 10.1049/cje.2019.00.102
摘要:
We consider the general problem of the approximate joint diagonalization of a set of non-Hermitian matrices. This problem mainly arises in the data model of the joint blind source separation for two datasets. Based on a special parameterization of the two diagonalizing matrices and on adapted approximations of the classical cost function, we establish two Jacobi-like algorithms. They may serve for the canonical polyadic decomposition (CPD) of a third-order tensor, and in some scenarios they can outperform traditional CPD methods. Simulation results demonstrate the competitive performance of the proposed algorithms.
A Combined Countermeasure Against Side-Channel and Fault Attack with Threshold Implementation Technique
JIAO Zhipeng, CHEN Hua, FENG Jingyi, KUANG Xiaoyun, YANG Yiwei, LI Haoyuan, FAN Limin
, doi: 10.1049/cje.2021.00.089
摘要:
Side-channel attack (SCA) and fault attack (FA) are two classical physical attacks against cryptographic implementations. In order to resist them, we present a combined countermeasure scheme which can resist both SCA and FA. The scheme combines threshold implementation (TI) with a duplication-based exchange technique. The exchange technique can confuse the fault propagation path and randomize the faulty values. The TI technique can ensure provable security against SCA. Moreover, it can also help to resist FA through its incompleteness property and random numbers. Compared with other methods, the proposed scheme has a simple structure, which can be easily implemented in hardware and results in a low implementation cost. Finally, we present a detailed design for the block cipher LED and implement it. The hardware cost evaluation shows that our scheme has the minimum overhead factor.
A Semi-Shared Hierarchical Joint Model for Sequence Labeling
LIU Gongshen, DU Wei, ZHOU Jie, LI Jing, CHENG Jie
, doi: 10.1049/cje.2020.00.363
摘要:
Multi-task learning is an essential yet practical mechanism for improving overall performance in various machine learning fields. Owing to the linguistic hierarchy, the hierarchical joint model is a common architecture in natural language processing. However, in state-of-the-art hierarchical joint models, higher-level tasks only share bottom layers or latent representations with lower-level tasks, thus ignoring correlations between tasks at different levels, i.e., lower-level tasks cannot be instructed by the higher-level features. This paper investigates how to strengthen the correlations among tasks supervised at different layers in an end-to-end hierarchical joint learning model. We propose a semi-shared hierarchical model that contains cross-layer shared modules and layer-specific modules. To fully leverage the mutual information between tasks at different levels, we design four different dataflows of latent representations between the shared and layer-specific modules. Extensive experiments on CTB-7 and CoNLL-09 show that our semi-shared approach outperforms basic hierarchical joint models on sequence tagging while having far fewer parameters. This suggests that a proper implementation of the cross-layer sharing mechanism and residual shortcuts is promising for improving the performance of hierarchical joint NLP models while reducing model complexity.
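A minimal PyTorch sketch of the semi-shared idea is given below: two tasks supervised at different depths, one cross-layer shared module plus layer-specific modules, and a residual shortcut from the lower-level representation into the higher-level branch. This is a toy illustration of the architecture style, not a reproduction of the paper's four dataflows.

```python
import torch
import torch.nn as nn

class SemiSharedTagger(nn.Module):
    """Toy semi-shared hierarchical model for two sequence-labeling tasks."""
    def __init__(self, vocab=1000, dim=64, low_tags=10, high_tags=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.shared = nn.GRU(dim, dim, batch_first=True)         # cross-layer shared module
        self.low_specific = nn.GRU(dim, dim, batch_first=True)   # layer-specific (low task)
        self.high_specific = nn.GRU(dim, dim, batch_first=True)  # layer-specific (high task)
        self.low_head = nn.Linear(dim, low_tags)
        self.high_head = nn.Linear(dim, high_tags)

    def forward(self, tokens):
        x = self.embed(tokens)
        shared, _ = self.shared(x)
        low, _ = self.low_specific(shared)
        low_logits = self.low_head(low)
        # The higher-level task sees the shared features plus a residual shortcut
        # from the lower-level representation, so information flows across levels.
        high, _ = self.high_specific(shared + low)
        high_logits = self.high_head(high)
        return low_logits, high_logits

model = SemiSharedTagger()
low_logits, high_logits = model(torch.randint(0, 1000, (2, 7)))
print(low_logits.shape, high_logits.shape)   # (2, 7, 10) and (2, 7, 20)
```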
Multipath Suppressing Method Based on Pseudorange Model Using Modified Teaching-Learning Based Optimization Algorithm
CHENG Lan, ZHANG Jing, NI Zihang, YAN Gaowei
, doi: 10.1049/cje.2020.00.168
摘要:
Satellite-based positioning has been widely applied to many areas of our daily lives and has thus become indispensable, which also leads to an increasing demand for high positioning accuracy. In some complex environments (such as dense urban areas and valleys), multipath interference is one of the main error sources deteriorating positioning accuracy, and it is difficult to eliminate via differential techniques due to its uncertainty of occurrence and its irrelevance across instants. To address this problem, we propose a positioning method for global navigation satellite systems (GNSS) that adopts a modified teaching-learning based optimization (TLBO) algorithm after the positioning problem is formulated as an optimization problem. Experiments are conducted using actual satellite data. The results show that the proposed positioning algorithm outperforms other algorithms, such as the particle swarm optimization based positioning algorithm, the differential evolution based positioning algorithm, the variable projection method, and the TLBO algorithm, in terms of accuracy and stability.
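The sketch below shows the general recipe of posing positioning as an optimization over pseudorange residuals and solving it with plain teaching-learning based optimization. The satellite geometry, noise level, and search bounds are synthetic placeholders, and this is the textbook TLBO rather than the paper's modified variant (no multipath model is included).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic scenario: 6 satellites, a true receiver position, and a clock bias.
sats = rng.uniform(-2e7, 2e7, size=(6, 3)) + np.array([0, 0, 2.0e7])
truth = np.array([1.2e6, -2.3e6, 4.5e6])
clock_bias = 150.0
pseudoranges = np.linalg.norm(sats - truth, axis=1) + clock_bias + rng.normal(0, 5, 6)

def cost(x):
    """Sum of squared pseudorange residuals; x = (position, clock bias)."""
    pred = np.linalg.norm(sats - x[:3], axis=1) + x[3]
    return np.sum((pseudoranges - pred) ** 2)

# Plain TLBO: teacher phase followed by learner phase, repeated over the population.
pop, iters = 40, 200
lo = np.array([-1e7, -1e7, -1e7, -1e3])
hi = np.array([1e7, 1e7, 1e7, 1e3])
X = rng.uniform(lo, hi, size=(pop, 4))
F = np.array([cost(x) for x in X])
for _ in range(iters):
    teacher = X[np.argmin(F)]
    mean = X.mean(axis=0)
    for i in range(pop):
        # Teacher phase: move toward the best solution relative to the class mean.
        tf = rng.integers(1, 3)                    # teaching factor in {1, 2}
        cand = np.clip(X[i] + rng.random(4) * (teacher - tf * mean), lo, hi)
        fc = cost(cand)
        if fc < F[i]:
            X[i], F[i] = cand, fc
        # Learner phase: learn from a randomly chosen peer.
        j = rng.integers(pop)
        step = (X[j] - X[i]) if F[j] < F[i] else (X[i] - X[j])
        cand = np.clip(X[i] + rng.random(4) * step, lo, hi)
        fc = cost(cand)
        if fc < F[i]:
            X[i], F[i] = cand, fc

best = X[np.argmin(F)]
print("position error [m]:", np.linalg.norm(best[:3] - truth))
```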
Necessary Condition for the Success of Synchronous GNSS Spoofing
WANG Yiwei, KOU Yanhong, HUANG Zhigang
, doi: 10.1049/cje.2021.00.307
摘要:
A synchronous GNSS generator spoofer aims at directly taking over the tracking loops of the receiver with the lowest possible spoofing to signal ratio (SSR) without forcing it to lose lock. This paper investigates the factors that affect spoofing success and their relationships. The necessary conditions for successful spoofing are obtained by deriving the code tracking error in the presence of spoofing and analyzing the effects of SSR, spoofing synchronization errors, and receiver settings on the S-curve ambiguity and code tracking trajectory. The minimum SSRs for a successful spoofing calculated from the theoretical formulation agree with Monte Carlo simulations at digital intermediate frequency signal level within 1 dB when the spoofer pulls the code phase in the same direction as the code phase synchronization error, and the required SSRs can be much lower when pulling in the opposite direction. The maximum spoofing code phase error for a successful spoofing is tested by using TEXBAT datasets, which coincides with the theoretical results within 0.1 chip. This study reveals the mechanism of covert spoofing and can play a constructive role in the future development of spoofing and anti-spoofing methods.
An Interactive Perception Method Based Collaborative Rating Prediction Algorithm
YAN Wenjie, ZHANG Jiahao, LI Ziqi
, doi: 10.1049/cje.2022.00.034
摘要:
To address the problems of low rating prediction accuracy and data sparsity on different datasets, we propose an interactive perception method based collaborative rating prediction algorithm (DCAE-MF), which fuses a dual convolutional autoencoder (Dual-CAE) and probabilistic matrix factorization (PMF). Deep latent representations of users and items are captured simultaneously by Dual-CAE and are deeply integrated with PMF to collaboratively make rating predictions based on the known rating history of users. A global multi-angle collaborative optimization learning method is developed to effectively optimize all the parameters of DCAE-MF. Extensive experiments are performed on seven real-world datasets to demonstrate the superiority of DCAE-MF on the key rating accuracy metrics of RMSE and MAE.
An Accurate Near-Field Distance Estimation Differential Algorithm
ZHAO Yan, TAO Haihong, CHANG Xin
, doi: 10.1049/cje.2021.00.174
摘要:
Triangular geometry is the basis of accurate near-field array distance estimation algorithms. The Fisher expression of traditional distance estimation is derived by utilizing the Taylor series. To improve the convergence rate and estimation accuracy, a novel iterative distance estimation algorithm based on differential equations over the triangular geometry is proposed. Firstly, its convergence performance is analysed in detail. Secondly, the selection of the initial value and the number of iterations are respectively studied. Thirdly, compared with traditional estimation algorithms utilizing the Fisher approximation, the proposed algorithm has a higher convergence rate and estimation accuracy. Moreover, its pseudocode is presented. Finally, experimental results and performance analysis are provided to verify the effectiveness of the proposed algorithm.
De-Convolution and De-Noising of SAR-Based GPS Images Using Hybrid Particle Swarm Optimization
RIZWAN Sadiq, MUHAMMAD B. Qureshi, MUHAMMAD M. Jadoon
, doi: 10.1049/cje.2021.00.138
摘要:
Synthetic aperture radar (SAR) imaging is an efficient strategy which exploits the properties of microwaves to capture images. A major concern in SAR imaging is the reconstruction of an image from back-scattered signals in the presence of noise. The reflected signal contains more noise than the target signal, and reducing the noise in the collected signal for better reconstruction of an image is a challenging problem. Current studies mostly focus on filtering techniques for noise removal, which can result in an undesirable point spread function (PSF) causing an extreme smearing effect in the desired image. In order to handle this problem, a computational technique, particle swarm optimization (PSO), is used for de-noising, and the performance is further improved by combining it with a Wiener filter. Moreover, to improve the de-noising performance we exploit singular value decomposition based morphological filtering. To justify the proposed improvements, we simulate the proposed techniques and compare the results with conventional existing models. The proposed method shows a considerable decrease in mean square error (MSE) compared to the Wiener filter and PSO techniques. A quantitative analysis of image restoration quality is also presented in comparison with the Wiener filter and PSO, based on the improvement in signal to noise ratio (ISNR) and peak signal to noise ratio (PSNR).
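As a minimal, self-contained illustration of the Wiener-deconvolution building block (on synthetic 1-D data; the PSO stage and the SVD-based morphological filtering of the paper are not reproduced, though a swarm could, for example, be used to tune the regularization constant K below):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D "image" blurred by a known Gaussian PSF and corrupted by noise.
signal = np.zeros(256)
signal[100:120] = 1.0
psf = np.exp(-0.5 * (np.arange(-8, 9) / 3.0) ** 2)
psf /= psf.sum()
blurred = np.convolve(signal, psf, mode="same") + rng.normal(0, 0.02, 256)

# Wiener deconvolution in the frequency domain: X = H* Y / (|H|^2 + K).
H = np.fft.fft(np.pad(psf, (0, 256 - psf.size)))
Y = np.fft.fft(blurred)
K = 0.01                      # regularization constant (could be tuned, e.g. by PSO)
X = np.conj(H) * Y / (np.abs(H) ** 2 + K)
restored = np.real(np.fft.ifft(X))

# Compensate the circular shift introduced by placing the PSF at index 0.
mse = np.mean((np.roll(restored, -(psf.size // 2)) - signal) ** 2)
print("restoration MSE:", mse)
```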
A Cross-Domain Ontology Semantic Representation Based on NCBI-BlueBERT Embedding
ZHAO Lingling, WANG Junjie, WANG Chunyu, GUO Maozu
, doi: 10.1049/cje.2020.00.326
摘要:
A common but critical task in biological ontology data analysis is to compare the differences between ontologies. Numerous ontology-based semantic similarity measures have been proposed for specific ontology domains, but cross-domain ontology comparison remains a challenge. An ontology contains the scientific natural-language description of the corresponding biological aspect. Therefore, we develop a new method based on the natural language processing (NLP) representation model bidirectional encoder representations from transformers (BERT) for the cross-domain semantic representation of biological ontologies. This article uses the BERT model to represent the word level of the ontologies as a set of vectors, facilitating semantic analysis and the comparison of biomedical entities named in an ontology or associated with ontology terms. We evaluated the ability of our method in two experiments: calculating similarities of pair-wise disease ontology and human phenotype ontology terms, and predicting pair-wise protein interactions. The experimental results demonstrate competitive performance. This gives promise to the development of NLP methods in biological data analysis.
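A minimal sketch of the general recipe (embed term descriptions with a transformer, then compare them by cosine similarity) is given below. The checkpoint name and the toy term descriptions are stand-ins: the paper uses NCBI BlueBERT weights, whereas "bert-base-uncased" is used here only so the sketch runs with a widely available model.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Stand-in checkpoint; substitute the NCBI BlueBERT weights used in the paper.
MODEL_NAME = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def embed(text):
    """Mean-pool the last hidden states into one vector per ontology term description."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state        # (1, tokens, hidden_size)
    return hidden.mean(dim=1).squeeze(0)

term_a = "A disease involving the carotid artery."         # toy ontology descriptions
term_b = "A phenotype characterized by narrowing of arteries."
sim = torch.nn.functional.cosine_similarity(embed(term_a), embed(term_b), dim=0)
print("cosine similarity:", float(sim))
```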
A Novel Robust Online Extreme Learning Machine for the Non-Gaussian Noise
GU Jun, ZOU Quanyi, DENG Changhui, WANG Xiaojun
, doi: 10.1049/cje.2021.00.122
摘要:
Samples collected from most industrial processes present two challenges: they are contaminated by non-Gaussian noise, and they gradually become obsolete. These characteristics can obviously reduce the accuracy and generalization of models. To handle these challenges, a novel method named the robust online extreme learning machine (RO-ELM) is proposed in this paper, in which the least mean $\boldsymbol{p}$-power criterion is employed as the cost function to boost the robustness of the ELM, and a forgetting mechanism is introduced to discard obsolete samples. To investigate the performance of the RO-ELM, experiments on artificial and real-world datasets with non-Gaussian noise are performed, and the datasets come from regression and classification problems. Results show that the RO-ELM is more robust than the ELM, the online sequential ELM (OS-ELM) and the OS-ELM with forgetting mechanism (FOS-ELM). The accuracy and generalization of the RO-ELM models are better than those of other models for online learning.
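A rough sketch of the core ingredient, an ELM whose output weights are fitted under a least mean p-power style objective via iteratively reweighted least squares, is shown below; the online update and forgetting mechanism of RO-ELM are omitted, and the data, p value, and ridge term are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with heavy-tailed (non-Gaussian) noise.
X = rng.uniform(-1, 1, size=(300, 2))
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + rng.standard_t(df=2, size=300) * 0.1

# Random hidden layer of an ELM.
n_hidden = 50
W = rng.normal(size=(2, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)

# Least mean p-power fitting of the output weights by IRLS:
# minimize sum_i |H_i beta - y_i|^p  (p < 2 down-weights outliers).
p, eps, ridge = 1.2, 1e-6, 1e-3
beta = np.linalg.solve(H.T @ H + ridge * np.eye(n_hidden), H.T @ y)
for _ in range(20):
    e = np.abs(H @ beta - y) + eps
    w = e ** (p - 2)                       # IRLS weights
    Hw = H * w[:, None]
    beta = np.linalg.solve(H.T @ Hw + ridge * np.eye(n_hidden), Hw.T @ y)

print("training MAE:", np.mean(np.abs(H @ beta - y)))
```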
Research on Global Clock Synchronization Mechanism in Software-Defined Control Architecture
LV Shuyu, DAI Xinfa, MA Zhong, GAO Yi, HU Zhekun
, doi: 10.1049/cje.2021.00.059
摘要:
Adopting software-definition technology to decouple the functional components of the industrial control system (ICS) in a service-oriented and distributed form is an important way for the industrial Internet of things to integrate information technology, communication technology, and operation technology. Therefore, this paper presents the concept of a software-defined control architecture and describes the time consistency requirements under the paradigm shift of the ICS architecture. By analyzing the physical clock and virtual clock mechanism models, the global clock synchronization space is logically divided into the physical and virtual clock synchronization domains, and a formal description of the global clock synchronization space is proposed. Based on an analysis of the clock state model, the physical clock linear filtering synchronization model is derived, and a distributed observation fusion filtering model is constructed by considering the two observation modes of the virtual clock, so as to realize the time synchronization of the global clock space by way of layer-by-layer timestamp transfer and fusion estimation. Finally, the simulation results show that the proposed model can significantly improve the accuracy and stability of clock synchronization.
Ancient Character Recognition: A Novel Image Dataset of Shui Manuscript Characters and Classification Model
TANG Minli, XIE Shaomin, LIU Xiangrong
, doi: 10.1049/cje.2022.00.077
摘要:
Shui manuscripts are part of the national intangible cultural heritage of China. Owing to the particularity of text reading, the level of informatization and intelligence in the protection of Shui manuscript culture is not adequate. To address this issue, this study created Shuishu_C, the largest image dataset of Shui manuscript characters that has been reported. Furthermore, after extensive experimental validation, we proposed ShuiNet-A, a lightweight artificial neural network model based on the attention mechanism, which combines channel and spatial dimensions to extract key features and finally recognize Shui manuscript characters. The effectiveness and stability of ShuiNet-A were verified through multiple sets of experiments. Our results showed that, on the Shui manuscript dataset with 113 categories, the accuracy of ShuiNet-A was 99.8%, which is 1.5% higher than those of similar studies. The proposed model could contribute to the classification accuracy and protection of ancient Shui manuscript characters.
Hyperspectral Image Classification Based on a Multi-Scale Weighted Kernel Network
SUN Le, XU Bin, LU Zhenyu
, doi: 10.1049/cje.2021.00.130
摘要:
Recently, many deep learning models have shown excellent performance in hyperspectral image (HSI) classification. Among them, networks with multiple convolution kernels of different sizes have been proved to achieve richer receptive fields and extract more representative features than those with a single convolution kernel. However, in most networks, different-sized convolution kernels are usually used directly on multi-branch structures, and the image features extracted from them are fused directly and simply. In this paper, to fully and adaptively explore the multiscale information in both spectral and spatial domains of HSI, a novel multi-scale weighted kernel network (MSWKNet) based on an adaptive receptive field is proposed. First, the original HSI cubic patches are transformed to the input features by combining the principal component analysis and one-dimensional spectral convolution. Then, a three-branch network with different convolution kernels is designed to convolve the input features, and adaptively adjust the size of the receptive field through the attention mechanism of each branch. Finally, the features extracted from each branch are fused together for the task of classification. Experiments on three well-known hyperspectral data sets show that MSWKNet outperforms many deep learning networks in HSI classification.
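A selective-kernel-style sketch of the three-branch idea is given below: branches with different kernel sizes are fused through learned, softmax-normalized branch weights so the effective receptive field adapts to the input. The channel count and kernel sizes are placeholders, and this is not necessarily MSWKNet's exact layout.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleWeightedBlock(nn.Module):
    """Three conv branches with different kernel sizes, fused by learned branch attention."""
    def __init__(self, channels=32, reduction=4):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, k, padding=k // 2) for k in (3, 5, 7)
        ])
        self.fc1 = nn.Linear(channels, channels // reduction)
        self.fc2 = nn.Linear(channels // reduction, channels * 3)

    def forward(self, x):
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, 3, C, H, W)
        pooled = feats.sum(dim=1).mean(dim=(2, 3))                 # (B, C) global context
        attn = self.fc2(F.relu(self.fc1(pooled)))                  # (B, 3*C)
        attn = attn.view(x.size(0), 3, -1).softmax(dim=1)          # weights over branches
        return (feats * attn[..., None, None]).sum(dim=1)          # adaptive fusion

block = MultiScaleWeightedBlock()
out = block(torch.randn(2, 32, 16, 16))
print(out.shape)    # torch.Size([2, 32, 16, 16])
```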
Clustering for Topological Interference Management
JIANG Xue, ZHENG Baoyu, WANG Lei, HOU Xiaoyun
, doi: 10.1049/cje.2021.00.277
摘要:
To reduce the overhead and complexity of channel state information acquisition in interference alignment, topological interference management (TIM) was proposed to manage interference using only the network topology information. The low-rank matrix completion approach previously used for TIM is known to be NP-hard. This paper considers a clustering method for the topological interference management problem; namely, low-rank matrix completion for TIM is applied within each cluster. Based on the clustering result, we solve the low-rank matrix completion problem via nuclear norm minimization and a Frobenius norm minimization function. Simulation results demonstrate that the proposed clustering method combined with TIM leads to significant gains in the achievable degrees of freedom.
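A generic soft-impute sketch of the per-cluster completion step is shown below: iterative singular value soft-thresholding serves as a simple surrogate for nuclear norm minimization on a synthetic low-rank matrix with missing entries. Cluster formation and the Frobenius-norm variant from the paper are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-rank matrix with missing entries (standing in for the TIM pattern).
n, r = 20, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))
mask = rng.random((n, n)) < 0.5          # True where the entry is observed

def soft_impute(M, mask, tau=1.0, iters=200):
    """Nuclear-norm-flavoured completion by iterative singular value soft-thresholding."""
    X = np.zeros_like(M)
    for _ in range(iters):
        filled = np.where(mask, M, X)            # keep observed entries, impute the rest
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - tau, 0.0)             # shrink singular values
        X = (U * s) @ Vt
    return X

X = soft_impute(M, mask)
err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print("relative error on missing entries:", err)
```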
Gmean Maximum FSVMI Model and Its Application for Carotid Artery Stenosis Risk Prediction
ZHANG Xueying, GUO Yuling, LI Fenglian, WEI Xin, HU Fengyun, HUI Haisheng, JIA Wenhui
, doi: 10.1049/cje.2020.00.185
摘要:
Carotid artery stenosis is a serious medical condition that can lead to stroke. By using machine learning methods to construct a classifier model, carotid artery stenosis can be diagnosed from transcranial Doppler data. We propose an improved fuzzy support vector machine (FSVMI) model to predict carotid artery stenosis, with the maximum geometric mean (Gmean) as the optimization target. The fuzzy membership function is obtained by combining information entropy with the normalized class-center distance. Experimental results showed that the proposed model was superior to the benchmark models in sensitivity and the geometric mean criterion.
Representation of Semantic Word Embeddings Based on SLDA and Word2vec Model
TANG Huanling, ZHU Hui, WEI Hongmin, ZHENG Han, MAO Xueli, LU Mingyu, GUO Jin
, doi: 10.1049/cje.2021.00.113
摘要:
To solve the problem of semantic loss in text representation, this paper proposes a new word representation embedding method in semantic space, called wt2svec, based on supervised latent Dirichlet allocation (SLDA) and Word2vec. It generates the global topic embedding word vector using SLDA, which can discover global semantic information through the latent topics on the whole document set, and obtains the local semantic embedding word vector from Word2vec. The new semantic word vector is obtained by combining the global semantic information with the local semantic information. Additionally, the document semantic vector, named doc2svec, is generated. The experimental results on different datasets show that the wt2svec model can obviously improve the accuracy of word semantic similarity and the performance of text categorization compared with Word2vec.
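A schematic sketch of fusing a global topic profile with a local Word2vec embedding by concatenation follows. The "SLDA" topic-word matrix below is a random placeholder (training SLDA is outside the sketch), and the tiny corpus exists only so the code runs end to end.

```python
import numpy as np
from gensim.models import Word2Vec

corpus = [
    ["machine", "learning", "improves", "text", "categorization"],
    ["topic", "models", "capture", "global", "semantics"],
    ["word", "embeddings", "capture", "local", "semantics"],
]

# Local semantic vectors from Word2vec.
w2v = Word2Vec(sentences=corpus, vector_size=20, min_count=1, epochs=50, seed=1)

# Placeholder "SLDA" topic-word matrix: rows are topics, columns are vocabulary words.
vocab = sorted({w for s in corpus for w in s})
rng = np.random.default_rng(0)
topic_word = rng.dirichlet(np.ones(len(vocab)), size=5)      # 5 topics

def wt2svec(word):
    """Concatenate the global topic profile with the local Word2vec embedding."""
    topic_vec = topic_word[:, vocab.index(word)]              # P(word | topic) profile
    return np.concatenate([topic_vec, w2v.wv[word]])

doc = corpus[0]
doc2svec = np.mean([wt2svec(w) for w in doc], axis=0)         # simple document vector
print(wt2svec("semantics").shape, doc2svec.shape)
```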
Lexicon-Augmented Cross-Domain Chinese Word Segmentation with Graph Convolutional Network
YU Hao, HUANG Kaiyu, WANG Yu, HUANG Degen
, doi: 10.1049/cje.2021.00.363
摘要:
Existing neural approaches have achieved significant progress in Chinese word segmentation (CWS). The performance of these methods tends to drop dramatically in cross-domain scenarios due to the data distribution mismatch across domains and the out-of-vocabulary word problem. To address these two issues, this paper proposes a lexicon-augmented graph convolutional network for cross-domain CWS. The novel model can capture the information of word boundaries from all candidate words and utilize domain lexicons to alleviate the distribution gap across domains. Experimental results on the cross-domain CWS datasets (SIGHAN-2010 and TCM) show that the proposed method successfully models the information of domain lexicons for neural CWS approaches and helps to achieve competitive performance for cross-domain CWS. The two problems of cross-domain CWS can be effectively solved through various interactions between characters and candidate words based on graphs. Further, experiments on the CWS benchmarks (Bakeoff-2005) also demonstrate the robustness and efficiency of the proposed method.
DeepHGNN: A Novel Deep Hypergraph Neural Network
LIN Jingjing, YE Zhonglin, ZHAO Haixing, FANG Lusheng
, doi: 10.1049/cje.2021.00.108
摘要:
With the development of deep learning, graph neural networks (GNNs) have yielded substantial results in various application fields. GNNs mainly consider pair-wise connections and deal with graph-structured data. In many real-world networks, however, the relations between objects are complex and go beyond pair-wise. A hypergraph is a flexible modeling tool to describe such intricate and higher-order correlations, and researchers have therefore been concerned with how to develop hypergraph-based neural network models. Existing hypergraph neural networks show good performance in node classification and related tasks, but they remain shallow networks because of over-smoothing, over-fitting and vanishing gradients. To tackle these issues, we present a novel deep hypergraph neural network (DeepHGNN). We design DeepHGNN by using hyperedge sampling together with the residual connection and identity mapping techniques borrowed from GCNs. We evaluate DeepHGNN on two visual object datasets. The experiments show the positive effects of DeepHGNN, and it works better in visual object classification tasks.
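The numpy sketch below combines an HGNN-style hypergraph convolution with a GCNII-style residual connection and identity mapping, which are the building blocks named above; hyperedge sampling is omitted and the tiny incidence matrix is a made-up example, so this illustrates the ingredients rather than DeepHGNN itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Incidence matrix H: 6 nodes, 3 hyperedges (1 if the node belongs to the hyperedge).
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [0, 0, 1],
              [1, 0, 1]], dtype=float)

Dv = np.diag(H.sum(axis=1))          # node degrees
De = np.diag(H.sum(axis=0))          # hyperedge degrees
Dv_isqrt = np.linalg.inv(np.sqrt(Dv))
P = Dv_isqrt @ H @ np.linalg.inv(De) @ H.T @ Dv_isqrt    # hypergraph propagation operator

def deep_hgnn_layer(X, X0, W, alpha=0.1, beta=0.5):
    """GCNII-style layer: residual to the initial features X0 plus identity mapping on W."""
    prop = (1 - alpha) * (P @ X) + alpha * X0
    return np.maximum(prop @ ((1 - beta) * np.eye(W.shape[0]) + beta * W), 0.0)  # ReLU

d = 8
X0 = rng.normal(size=(6, d))
X = X0
for layer in range(16):                        # stays stable even when stacked deep
    W = rng.normal(size=(d, d)) * 0.1
    X = deep_hgnn_layer(X, X0, W, beta=0.5 / (layer + 1))
print(X.shape)
```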
Linear Complexity of a Family of Binary $p^2q^2$-Periodic Sequences from Euler Quotients
LUO Bingyu, ZHANG Jingwei, ZHAO Chang’an
, doi: 10.1049/cje.2020.00.125
摘要:
A family of binary sequences derived from Euler quotients $\psi(\cdot)$ with RSA modulus $pq$ is introduced. Here the two primes $p$ and $q$ are distinct and satisfy $\gcd(pq, (p-1)(q-1))=1$. The linear complexities and minimal polynomials of the proposed sequences are determined. Besides, this kind of sequence is shown not to have correlations of order four, although the relation $\psi(t)-\psi(t+p^2q)-\psi(t+q^2p)+\psi(t+(p+q)pq)\equiv 0 \pmod {pq}$ holds for any integer $t$ by the properties of Euler quotients.
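A small numerical sketch of the Euler quotient and the stated four-term relation is given below, using toy primes $p=3$, $q=5$ (which satisfy the gcd condition). The paper's rule for turning $\psi$ into the binary sequence is not reproduced here; only the quotient and the quoted congruence are checked.

```python
from math import gcd

p, q = 3, 5                      # toy primes with gcd(pq, (p-1)(q-1)) = 1
N = p * q
phi = (p - 1) * (q - 1)
assert gcd(N, phi) == 1

def euler_quotient(t):
    """psi(t) defined by t^phi(N) = 1 + psi(t)*N (mod N^2), for gcd(t, N) = 1."""
    if gcd(t, N) != 1:
        return 0                 # common convention when the quotient is undefined
    return ((pow(t, phi, N * N) - 1) // N) % N

# Check the relation quoted in the abstract:
# psi(t) - psi(t + p^2 q) - psi(t + q^2 p) + psi(t + (p+q) p q) = 0 (mod pq).
for t in range(1, 60):
    if gcd(t, N) != 1:
        continue
    lhs = (euler_quotient(t) - euler_quotient(t + p * p * q)
           - euler_quotient(t + q * q * p) + euler_quotient(t + (p + q) * p * q)) % N
    assert lhs == 0, t
print("relation holds for all tested t coprime to", N)
```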