## Online First

Online First papers have been peer-reviewed and accepted for publication. Note that papers in this directory are posted online prior to technical editing and author proofing, so please use them with caution.


Available online, doi: 10.1049/cje.2020.00.391
Abstract:
Software trustworthiness is an essential criterion for evaluating software quality. In component-based software, different components play different roles, and different users assign different grades of trustworthiness after using the software; both factors affect the trustworthiness of the software. When software quality is evaluated comprehensively, it is therefore necessary to consider both component weights and user feedback. According to the different ways components are composed, different trustworthiness measurement models are established based on component weights and user feedback. Algorithms for these trustworthiness measurement models are designed to obtain the corresponding trustworthiness measurement values automatically. The feasibility of these models is demonstrated on a train ticket purchase system.
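As a minimal sketch of the idea of weighting components and user feedback (the function name, the linear combination, and the balance factor `alpha` are assumptions for illustration, not the paper's exact models):

```python
def software_trustworthiness(components, feedback, alpha=0.7):
    """Hypothetical weighted aggregation (not the paper's exact model):
    components are (weight, trustworthiness) pairs, feedback is a list of
    user grades in [0, 1], and alpha balances the two parts."""
    total_weight = sum(w for w, _ in components)
    comp_part = sum(w * t for w, t in components) / total_weight
    fb_part = sum(feedback) / len(feedback)
    return alpha * comp_part + (1 - alpha) * fb_part

# A heavily weighted trusted component dominates the component part
score = software_trustworthiness([(2, 0.9), (1, 0.6)], [0.8, 1.0, 0.7])
```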
Available online, doi: 10.1049/cje.2021.00.139
Abstract:
The similarity detection between two cross-platform binary functions has been applied in many fields, such as vulnerability detection, software copyright protection, and malware classification. Current advanced methods for binary function similarity detection usually use semantic features but have certain limitations. For example, practical applications may encounter instructions not seen during training, which easily causes the out-of-vocabulary (OOV) problem. In addition, the extracted binary semantic features may generalize poorly, resulting in lower accuracy of the trained model in practical applications. To overcome these limitations, we propose a double-layer positional encoding based transformer model (DP-Transformer). The DP-Transformer’s encoder is used to extract the semantic features of the source instruction set architecture (ISA), and is called the source ISA encoder. Then, the source ISA encoder is fine-tuned by triplet loss while the target ISA encoder is trained; this process is called DP-MIRROR. When facing semantically identical basic blocks, the embedding vectors of the source and target ISA encoders are similar. Unlike the traditional transformer, which uses single-layer positional encoding, the double-layer positional encoding embedding can solve the OOV problem while ensuring the separation between instructions, so it is more suitable for embedding assembly instructions. Our comparative experiments show that DP-MIRROR outperforms the state-of-the-art approach, MIRROR, by about 35% in terms of precision at 1.
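The fine-tuning step relies on the standard triplet loss, which pulls an anchor embedding toward a semantically equivalent block and pushes it away from a different one. A minimal sketch on plain vectors (the margin value and squared-distance form are assumptions, not taken from the paper):

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard squared-distance triplet loss on embedding vectors; the
    margin value here is an assumption, not taken from the paper."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return max(sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin, 0.0)

# An anchor close to its positive and far from its negative yields low loss
loss = triplet_loss([0.0, 0.0], [0.1, 0.0], [1.0, 0.0])
```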
Available online, doi: 10.1049/cje.2021.00.125
Abstract:
Many algorithms in computational science involve search, and algorithms that can be transformed into search problems have garnered significant attention, especially regarding the speed and accuracy of search. This paper proposes a quantum walk search algorithm on hypergraphs that aims to reduce time consumption and increase the readiness and controllability of search. First, the data points are divided into groups and mapped isomorphically to a permutation set. Second, the element coordinates in the permutation set are used to mark the positions of the data points. The target data are then searched by a controllable multi-particle quantum walk on the ring. By controlling the coin operator of the quantum walk, the search algorithm increases the accuracy and controllability of the search, and time consumption is reduced by increasing the number of search particles. The approach also provides a new direction for the design of quantum walk algorithms, which may eventually lead to entirely new algorithms.
Available online, doi: 10.1049/cje.2020.00.268
Abstract:
Non-intrusive load monitoring (NILM) can infer the status of appliances in operation and their energy consumption by analyzing the energy data collected from monitoring devices. With the rapid increase in the number and types of electric loads, the traditional approach of uploading all energy data to the cloud faces enormous challenges. It becomes increasingly important to construct distinguishable load signatures and build robust classification models for NILM. In this paper, we propose a load signature construction method for the load recognition task in home scenarios. The load signature is based on Gramian angular field encoding, which is convenient to construct and significantly reduces the data transmission volume of the network. Moreover, an edge computing architecture can reasonably utilize computing resources and relieve the pressure on the server. The experimental results on NILM datasets demonstrate that the proposed method obtains superior performance in the recognition of household appliances under insufficient resources.
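The Gramian angular field step can be sketched directly from its standard definition: rescale the signal to [-1, 1], map each value to an angle, and form the matrix of summed-angle cosines. This is the textbook GASF construction, not the paper's full pipeline, and it assumes a non-constant input series:

```python
import math

def gramian_angular_field(series):
    """Gramian angular summation field: rescale the load signal to [-1, 1],
    map each value v to an angle phi = arccos(v), and build the matrix
    G[i][j] = cos(phi_i + phi_j). Assumes the series is not constant."""
    lo, hi = min(series), max(series)
    scaled = [2 * (v - lo) / (hi - lo) - 1 for v in series]
    phi = [math.acos(v) for v in scaled]
    return [[math.cos(a + b) for b in phi] for a in phi]

gaf = gramian_angular_field([0.0, 1.0, 2.0])
```

The resulting matrix is a fixed-size, image-like signature that suits the CNN-style classifiers typically used for load recognition.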
Available online, doi: 10.1049/cje.2020.00.217
Abstract:
The dynamic code loading mechanism of the Android system allows an application to load executable files externally at runtime. This mechanism makes application development more convenient, but it also brings security issues. Applications that hide malicious behavior in an external file through dynamic code loading are becoming a new challenge for Android malware detection. To overcome this challenge, three types of threat models based on dynamic code loading mechanisms, i.e., Model I, Model II, and Model III, are defined. For Model I malware, whose malicious behavior occurs in the DexCode, application programming interface (API) classes are used to characterize the behavior of the DexCode file. For Model II and Model III malware, whose malicious behavior occurs in an external file, the permission complement is defined to characterize the behavior of the external file. Based on the permission complement and API calls, an Android malicious application detection method is proposed, whose feature sets are constructed by improving a feature selection method. Five datasets containing 15,581 samples are used to evaluate the performance of the proposed method. The experimental results show that our detection method achieves an accuracy of 99.885% on the general dataset and performs best on all evaluation metrics across all datasets among all comparison methods.
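One plausible reading of the "permission complement" (the paper's precise definition may differ; this helper and its inputs are hypothetical) is a set difference: permissions the app declares but that its DexCode never exercises, hinting that they serve externally loaded code:

```python
def permission_complement(declared_permissions, dexcode_permissions):
    """Hypothetical reading of 'permission complement': permissions the app
    declares that the DexCode itself never uses, suggesting they serve
    externally loaded code. Not necessarily the paper's exact definition."""
    return sorted(set(declared_permissions) - set(dexcode_permissions))

extra = permission_complement(
    {"INTERNET", "SEND_SMS", "READ_CONTACTS"},
    {"INTERNET"},
)
```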
Available online, doi: 10.1049/cje.2021.00.092
Abstract:
Many cryptanalytic techniques for symmetric-key primitives rely on specific statistical analysis to extract secret key information from a large number of known or chosen plaintext-ciphertext pairs. For example, there is a standard statistical model for differential cryptanalysis that determines the success probability and complexity of the attack given some predefined configuration of the attack. In this work, we investigate the differential attack proposed by Guo et al. at Fast Software Encryption Conference 2020 and find that in this attack, the statistical behavior of the counters for key candidates deviates from standard scenarios: both the correct key and the correct key XORed with a specific difference are expected to receive the largest number of votes. Based on this bimodal behavior, we give three different statistical models for truncated differential distinguishers on the block cipher CRAFT. We then provide formulas for the success probability and data complexity of the different models under a fixed threshold value. We also verify the validity of our models for bimodal phenomena by experiments on distinguishers for round-reduced versions of CRAFT, and find that the theoretical and experimental success probabilities are close when the data complexity and threshold value are fixed. Finally, we compare the three models using MATLAB and conclude that Model 3 has the best performance.
Available online, doi: 10.1049/cje.2021.00.274
Abstract:
With the popularization and development of social software, more and more people join social networks, which produces a great deal of valuable information but also contains plenty of sensitive privacy information. To achieve personalized privacy protection of massive social network relational data, a privacy enhancement method for social network relational data based on personalized differential privacy is proposed, together with a dimensionality reduction segmentation sampling (DRS-S) algorithm that implements it. First, to solve the inefficiency caused by the excessive amount of data in social networks, dimensionality reduction and segmentation are carried out to divide the data into groups. According to the privacy protection requirements of different users, a sampling method is adopted to protect users with different privacy requirements at different levels, so as to realize personalized differential privacy. After that, noise is added to the protected data to satisfy the privacy budget, and the social network data are published. Finally, the proposed algorithm is compared with the traditional personalized differential privacy (PDP) algorithm and the privacy preserving approach based on clustering and noise (PBCN) on real datasets. The experimental results demonstrate that the privacy protection quality and data availability of DRS-S are better than those of the PDP and PBCN algorithms.
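The noise-addition step can be illustrated with the standard Laplace mechanism, where each user's own privacy requirement supplies the epsilon (the function names and the inverse-CDF sampler are a generic sketch, not the paper's implementation):

```python
import math
import random

def laplace_sample(scale, u=None):
    """Inverse-CDF sample from Laplace(0, scale); u in (0, 1) can be
    supplied explicitly for reproducibility."""
    if u is None:
        u = random.random()
    v = u - 0.5
    return -scale * math.copysign(1.0, v) * math.log(1 - 2 * abs(v))

def personalized_dp_release(value, sensitivity, epsilon):
    """Laplace mechanism with a per-user epsilon: users who demand stronger
    privacy (smaller epsilon) receive noisier released values."""
    return value + laplace_sample(sensitivity / epsilon)

# u = 0.5 sits at the distribution's center, so no noise is added
assert laplace_sample(2.0, u=0.5) == 0.0
```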
Available online, doi: 10.1049/cje.2020.00.049
Abstract:
By allowing intermediate nodes to combine multiple packets before forwarding them, network coding in multicast networks can achieve the maximum possible information flow. However, this also means traditional encryption methods are less applicable, since the different public keys of receivers imply different ciphertexts that cannot be easily combined by network coding. While network coding itself may provide confidentiality, its effectiveness heavily depends on the underlying network topology and the ability of the eavesdroppers. Broadcast encryption and group key agreement techniques both allow a sender to broadcast the same ciphertext to all receivers, although they rely on the assumption of trusted key servers or secure channels. In this paper, we propose a novel public-key encryption concept with a single public key for encryption and multiple secret keys for decryption (MSK-PK), which has limited ciphertext expansion and requires neither trusted key servers nor secure channels. To demonstrate the feasibility of this concept, we construct a concrete scheme based on a class of lattice-based multi-trapdoor functions. We prove that those functions satisfy the one-wayness property and can resist the nearest plane algorithm.
Available online, doi: 10.1049/cje.2020.00.125
Abstract:
A family of binary sequences derived from Euler quotients $\psi(\cdot)$ with RSA modulus $pq$ is introduced, where the two primes $p$ and $q$ are distinct and satisfy $\gcd(pq, (p-1)(q-1))=1$. The linear complexities and minimal polynomials of the proposed sequences are determined. Besides, this kind of sequence is shown not to have correlation of order four, although the relation $\psi(t)-\psi(t+p^2q)-\psi(t+q^2p)+\psi(t+(p+q)pq)\equiv 0 \pmod {pq}$ holds for any integer $t$ by the properties of Euler quotients.
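The Euler quotient admits a direct computation from its standard definition, $\psi(t) = \big(t^{\varphi(pq)} - 1\big)/(pq) \bmod pq$ for $\gcd(t, pq) = 1$, which also lets one check the stated congruence numerically on a toy modulus (the helper below follows the standard definition; the paper's sequence construction itself is not reproduced here):

```python
import math

def euler_quotient(t, p, q):
    """Euler quotient psi(t) modulo w = pq, via the standard definition
    (t**phi(w) - 1) // w mod w, valid when gcd(t, w) = 1. Computing the
    power mod w**2 keeps the quotient exact modulo w."""
    w = p * q
    phi = (p - 1) * (q - 1)
    assert math.gcd(t, w) == 1
    return ((pow(t, phi, w * w) - 1) // w) % w

# Toy check of the four-term relation for p = 3, q = 5, t = 2
p, q, t = 3, 5, 2
lhs = (euler_quotient(t, p, q)
       - euler_quotient(t + p * p * q, p, q)
       - euler_quotient(t + q * q * p, p, q)
       + euler_quotient(t + (p + q) * p * q, p, q))
```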
Available online, doi: 10.1049/cje.2020.00.206
Abstract:
Data recovery from flash memory in mobile devices can effectively reduce the loss caused by data corruption. Type recognition of data fragments is an essential prerequisite for low-level data recovery. Previous works in this field classify a data fragment based on its file type, but the classification efficiency is low, especially when the data fragment is part of a composite file. We propose a fine-grained approach to classifying data fragments from low-level flash memory to improve classification accuracy and efficiency. The proposed method redefines the flash-memory-page data recognition problem based on the encoding format of the data segment, and applies a hybrid machine learning algorithm to detect the data type of the flash page. The hybrid algorithm can significantly decompose the given data space and reduce the cost of training. The experimental results show that our method achieves better classification accuracy and higher time performance than existing methods.
Available online, doi: 10.1049/cje.2021.00.269
Abstract:
The edge-cloud collaborative application scenario is complex: it involves collaborative operations among different security domains, with mobile terminals frequently joining and leaving the application system. A cross-domain identity authentication protocol based on privacy protection is proposed. The main advantages of the protocol are as follows. 1) Self-certified key generation: the public/private key pair of the mobile terminal is generated by the terminal members themselves, which avoids the security risks caused by third-party key distribution and key escrow. 2) Cross-domain identity authentication: alliance keys are calculated among edge servers through blockchain technology, and cross-domain identity authentication is realized through the signature authentication of the alliance domain; the cross-domain authentication process is simple and efficient. 3) Revocability of identity authentication: when the mobile terminal logs off or exits the system, the terminal’s legal identity in the system immediately becomes invalid, ensuring the forward and backward security of access to system resources. Under the hardness assumptions of the discrete logarithm problem and the computational Diffie-Hellman problem, the security of the protocol is proven and its efficiency is verified.
Available online, doi: 10.1049/cje.2020.00.414
Abstract:
Residual computation is an effective method for gray-scale image steganalysis. For binary images, residual computation via the XOR operation is also employed in the local residual patterns (LRP) model for steganalysis. In this paper, a binary image steganalytic scheme based on symmetrical local residual patterns (SLRP) is proposed. Symmetrical relationships among residual patterns are introduced, which make the features more compact while reducing the dimensionality of the feature set. Multi-scale windows are utilized to construct three SLRP submodels, which are then merged to construct the final feature set instead of a single model. Moreover, SLRPs with a higher probability of being modified after embedding are emphasized and selected to construct the feature sets for training the SVM classifier. Experimental results show that the proposed steganalytic scheme is effective for detecting binary image steganography.
Available online, doi: 10.1049/cje.2021.00.294
Abstract:
Quantum algorithms are raising concerns in the field of cryptography all over the world. A growing number of symmetric cryptography algorithms have been attacked in the quantum setting. Type-3 generalized Feistel scheme (GFS) and unbalanced Feistel scheme with expanding functions (UFS-E) are common symmetric cryptography schemes, which are often used in cryptographic analysis and design. We propose quantum attacks on the two Feistel schemes. For $d$-branch Type-3 GFS and UFS-E, we propose distinguishing attacks on $(d+1)$-round Type-3 GFS and UFS-E in polynomial time in the quantum chosen plaintext attack (qCPA) setting. We propose key recovery by applying Grover's algorithm and Simon's algorithm. For $r$-round $d$-branch Type-3 GFS with $k$-bit length subkey, the complexity is $O({2^{(d - 1)(r - d - 1)k/2}})$ for $r\ge d + 2$. The result is better than that based on exhaustive search by a factor ${2^{({d^2} - 1)k/2}}$. For $r$-round $d$-branch UFS-E, the attack complexity is $O({2^{(r - d - 1)(r - d)k/4}})$ for $d + 2 \le r \le 2d$, and $O({2^{(d - 1)(2r - 3d)k/4}})$ for $r > 2d$. The results are better than those based on exhaustive search by factors ${2^{(4rd - {d^2} - d - {r^2} - r)k/4}}$ and ${2^{3(d - 1)dk/4}}$ in the quantum setting, respectively.
Available online, doi: 10.1049/cje.2021.00.363
Abstract:
Existing neural approaches have achieved significant progress for Chinese word segmentation (CWS). However, the performance of these methods tends to drop dramatically in cross-domain scenarios due to the data distribution mismatch across domains and the out-of-vocabulary (OOV) word problem. To address these two issues, this paper proposes a lexicon-augmented graph convolutional network for cross-domain CWS. The model can capture the information of word boundaries from all candidate words and utilize domain lexicons to alleviate the distribution gap across domains. Experimental results on the cross-domain CWS datasets (SIGHAN-2010 and TCM) show that the proposed method successfully models the information of domain lexicons for neural CWS approaches and helps to achieve competitive performance for cross-domain CWS. The two problems of cross-domain CWS can be effectively solved through various interactions between characters and candidate words based on graphs. Further experiments on the CWS benchmarks (Bakeoff-2005) also demonstrate the robustness and efficiency of the proposed method.
Available online, doi: 10.1049/cje.2021.00.217
Abstract:
In the traditional quantum wolf pack algorithm, the wolf pack distribution is simplified and the leader wolf is selected randomly, so the development and exploration ability of the algorithm is weak and its rate of convergence is slow. Therefore, a quantum wolf pack evolutionary algorithm with weighted decision-making based on fuzzy control is proposed in this paper. First, to diversify the wolf pack distribution and regularize the selection of the leader wolf, a dual-strategy method and the sliding mode cross principle are adopted to optimize the selection of the quantum wolf pack’s initial position and the candidate leader wolf. Second, a new non-linear convergence factor is adopted to improve the leader wolf’s search direction operator and enhance the local search capability of the algorithm. Meanwhile, a weighted decision-making strategy based on fuzzy control and the quantum evolutionary computation method is used to update the position of the wolf pack and enhance the optimization ability of the algorithm. Then, a functional analysis method is adopted to prove the convergence of the quantum wolf pack algorithm, establishing the algorithm’s global convergence. The performance of the proposed algorithm was verified on six standard test functions, and the optimization results were compared with the standard wolf pack algorithm and the quantum wolf pack algorithm. The results show that the improved algorithm has a faster rate of convergence, higher convergence precision, and stronger development and exploration ability.
Available online, doi: 10.1049/cje.2021.00.113
Abstract:
To solve the problem of semantic loss in text representation, this paper proposes a new word embedding method in semantic space called wt2svec, based on supervised LDA (SLDA) and Word2vec. It generates the global topic embedding word vector using SLDA, which can discover global semantic information through the latent topics on the whole document set, and obtains the local semantic embedding word vector from Word2vec. The new semantic word vector is obtained by combining the global semantic information with the local semantic information. Additionally, a document semantic vector named doc2svec is generated. The experimental results on different datasets show that the wt2svec model noticeably improves the accuracy of word semantic similarity and the performance of text categorization compared with Word2vec.
Available online, doi: 10.1049/cje.2021.00.214
Abstract:
Since the basic probability of an interval-valued belief structure (IBS) is assigned as an interval number, its combination becomes difficult. In particular, when dealing with highly conflicting IBSs, most existing combination methods may produce counter-intuitive results, incur a heavy computational burden due to nonlinear optimization models, and lose the desirable associativity and commutativity properties of Dempster-Shafer theory (DST). To address these problems, a novel conflicting IBS combination method named the CSUI (conflict, similarity, uncertainty, intuitionistic fuzzy sets)-DST method is proposed by introducing a similarity measure for the degree of conflict among IBSs and an uncertainty measure for the degree of discord, non-specificity, and fuzziness of IBSs. Considering these two measures simultaneously, the weight of each IBS is determined according to the modified reliability degree. From the perspective of intuitionistic fuzzy sets, we propose a weighted average IBS combination rule using the addition and scalar multiplication operators. The effectiveness and rationality of this combination method are validated with two numerical examples and an application to target recognition.
Available online, doi: 10.1049/cje.2021.00.032
Abstract:
Graphene solution-gated field effect transistors (G-SgFETs) have been widely developed in the field of biosensors, but their theory remains incomplete. A theoretical model for the G-SgFET, including a three-terminal equivalent circuit model and a numerical calculation method, is proposed through comprehensive analysis of the graphene-liquid interface and the FET principle. Not only the applied voltages on the gate-source and drain-source electrode pairs, but also the nature of graphene and its derivatives are considered, by analysing their influence on the Fermi level and the carriers’ concentration and mobility, which in turn affect the output drain-source current. To verify the model’s applicability to G-SgFETs based on graphene prepared by different methods, three kinds of graphene materials are used as examples: liquid-phase exfoliated graphene, reduced graphene oxide (rGO), and tetra(4-aminophenyl) porphyrin hybridized rGO. By modulating the Fermi level and mobility, the calculated output and transfer characteristic curves coincide with the measured ones, confirming the model’s suitability for simulating the basic electrical features of G-SgFETs. Furthermore, as a proof of concept, the model is used to simulate the current response of G-SgFETs to biological functionalization with an aptamer and to the detection of circulating tumor cells. The calculated current changes are compared with the experimental results, verifying that the proposed G-SgFET model is also suitable for mimicking the bio-electronic response, which may allow a preview of conceived G-SgFET biosensors and improve design efficiency.
Available online, doi: 10.1049/cje.2021.00.057
Abstract:
This paper presents a high-precision successive approximation register (SAR) analog-to-digital converter (ADC) with a resistive analog front-end for low-voltage and wide-input-range applications. To suppress the serious nonlinearity brought by the voltage coefficients of the analog front-end without deteriorating differential nonlinearity performance, a mixed-signal calibration scheme based on a piecewise-linear method with a calibration digital-to-analog converter is proposed. A compensation current is designed to sink or source from the reference to keep it independent of the input signal, which greatly improves linearity. Fabricated in a 0.5-μm CMOS process, the proposed ADC achieves an 88-dB signal-to-noise-and-distortion ratio and a 103-dB spurious-free dynamic range with a 5-V supply voltage and a 2.5-V reference voltage, and the total power consumption is 37.5 mW.
Available online, doi: 10.1049/cje.2020.00.178
Abstract:
In this paper, we numerically demonstrate the possibility of using wurtzite boron gallium nitride (W-BGaN) as the active layers (quantum well and quantum barriers), along with aluminum gallium nitride (AlGaN), to achieve deep-ultraviolet lasing at 263 nm in an edge-emitting laser diode. The laser diode structure simulations were conducted using the Crosslight-LASTIP software with a self-consistent model for the various quantity calculations. Multiple structures, including full and half designs, were realized, and the effects of linearly-graded-down and linearly-graded-up grading techniques at the electron blocking layer were studied. As a result, a maximum emitted power of 26 W, a minimum threshold current of 308 mA, a slope efficiency of 2.82 W/A, and a minimum p-type resistivity of 0.228 Ω·cm were observed and recorded across the different doping concentrations and geometrical distances.
Available online, doi: 10.1049/cje.2020.00.315
Abstract:
A high-efficiency waveguide slot array antenna with a low sidelobe level (SLL) is investigated for W-band applications. Silicon micromachining technology is utilized to realize the multilayer antenna architecture through three key steps: selective etching, gold plating, and Au-Au bonding. The radiating slot based on this technique is thick, with a minimum thickness of 0.2 mm, which decreases the slot’s radiation ability. To overcome this weakness, a stepped radiation cavity is loaded on the slot. The characteristics of the cavity-loaded slot are investigated to synthesize the low-SLL array antenna. An unequal hybrid corporate feeding network is constructed to achieve sidelobe suppression in the E-plane. A pair of 16 × 8 low-SLL and high-efficiency slot arrays is fabricated and confirmed experimentally. The bandwidth with radiation efficiency higher than 80% is 92.3–96.3 GHz. The SLLs in both the E- and H-planes are below −19 dB.
Available online, doi: 10.1049/cje.2020.00.286
Abstract:
The passive noise-shaping successive approximation register (NS-SAR) analog-to-digital converter (ADC) demonstrates high performance in resolution improvement, power reduction, and process scaling, but its charge-sharing loss and limited bandwidth weaken the noise-shaping effect. This paper presents a first-order NS-SAR ADC based on an error-feedback (EF) structure to realize high-efficiency noise shaping. It employs a lossless EF path using a set of ping-pong switching capacitors with a passive signal-residue summation technique. The proposed first-order EF NS-SAR prototype can be extended to a multi-order structure with minor modifications. Verified by simulation in a 65-nm CMOS process, the proposed 9-bit NS-SAR ADC consumes 183.66 μW when operating at 20 MS/s with a 1.2-V supply voltage. At an oversampling ratio of 16, it achieves a peak signal-to-noise-and-distortion ratio of 81 dB, yielding a Schreier figure of merit (FoM) of 176.32 dB.
Available online, doi: 10.1049/cje.2021.00.283
Abstract:
This work presents a novel plane-based, area-saving control BUS design with distributed registers in 3D NAND flash memory. Control-signal routing wires are reduced by 99.47% compared to the conventional control circuit design. Independent multi-plane read is compatible with the existing read operations because the register addresses are reasonably assigned. Furthermore, a power-saving register-group-address-based plane gating scheme is proposed, which saves about 2.9 mW of BUS toggling power. A four-plane control BUS design with 20K-bit registers has been demonstrated on an FPGA tester. The results show that the plane-based control BUS design is beneficial to high-performance 3D NAND flash memory design.
Available online, doi: 10.1049/cje.2020.00.168
Abstract:
Satellite-based positioning has been widely applied to many areas in our daily lives and has thus become indispensable, which also leads to increasing demand for high positioning accuracy. In some complex environments (such as dense urban areas and valleys), multipath interference is one of the main error sources deteriorating positioning accuracy, and it is difficult to eliminate via differential techniques due to its uncertain occurrence and lack of correlation across instants. To address this problem, we formulate positioning as an optimization problem and propose a positioning method for global navigation satellite systems (GNSS) based on a modified teaching-learning based optimization (TLBO) algorithm. Experiments conducted on actual satellite data show that the proposed positioning algorithm outperforms other algorithms, such as the particle swarm optimization (PSO) based positioning algorithm, the differential evolution (DE) based positioning algorithm, the variable projection (VP) method, and the standard TLBO algorithm, in terms of accuracy and stability.
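For orientation, the teacher phase of the standard TLBO algorithm can be sketched as follows; the paper applies a modified variant, and the cost function, population, and function names here are illustrative assumptions:

```python
import random

def tlbo_teacher_phase(population, fitness):
    """Teacher phase of standard TLBO (the paper uses a modified variant):
    each learner moves toward the best solution found so far and away from
    a scaled population mean. fitness is minimized."""
    dim = len(population[0])
    teacher = min(population, key=fitness)
    mean = [sum(p[i] for p in population) / len(population) for i in range(dim)]
    tf = random.choice([1, 2])  # teaching factor
    return [[p[i] + random.random() * (teacher[i] - tf * mean[i]) for i in range(dim)]
            for p in population]

# Minimize a simple 2-D sphere function as a stand-in for the positioning cost
pop = [[3.0, -1.0], [0.5, 0.5], [-2.0, 2.0]]
new_pop = tlbo_teacher_phase(pop, lambda p: p[0] ** 2 + p[1] ** 2)
```

In the positioning setting, the fitness function would measure the residual between the candidate receiver position and the satellite pseudorange measurements.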
Available online, doi: 10.1049/cje.2021.00.277
Abstract:
To reduce the overhead and complexity of channel state information (CSI) acquisition in interference alignment (IA), topological interference management (TIM) was proposed to manage interference using only the network topology information. Solving TIM via the low-rank matrix completion (LRMC) approach is known to be NP-hard. This paper considers a clustering method for the topological interference management problem: low-rank matrix completion for TIM is applied within each cluster. Based on the clustering result, we solve the low-rank matrix completion problem via nuclear norm minimization and a Frobenius norm minimization function. Simulation results demonstrate that the proposed clustering method combined with TIM leads to significant gains in the achievable degrees of freedom.
Available online, doi: 10.1049/cje.2020.00.330
Abstract:
Protein localization information is essential for understanding protein functions and their roles in various biological processes. Image-based prediction methods for protein subcellular localization have emerged in recent years because of the advantages of microscopic images in revealing the spatial expression and distribution of proteins in cells. However, image-based prediction is a very challenging task, due to the multi-instance nature of the task and the low quality of the images. In this paper, we propose a multi-task learning strategy and mask generation to enhance prediction performance, and we also investigate effective multi-instance learning schemes. We collect a large-scale dataset from the Human Protein Atlas database, and the experimental results show that the proposed multi-task multi-instance learning model outperforms both single-instance learning and common multi-instance learning methods by large margins.
Available online, doi: 10.1049/cje.2021.00.282
Abstract:
In this paper, we propose a low-complexity distributed approach to the multi-target detection/tracking problem in the presence of noisy and missing data. The proposed approach consists of two components: a distributed flooding scheme for exchanging measurements among sensors, and a sampling-based clustering approach for target detection/tracking from the aggregated measurements. The main advantage of the proposed approach over the prevailing Markov-Bayes-based distributed filters is that it does not require any a priori information; all it needs is the measurement set from multiple sensors. A comparison with available distributed clustering approaches and cutting-edge distributed multi-Bernoulli filters modeled with appropriate parameters confirms the effectiveness and reliability of the proposed approach.
Available online, doi: 10.1049/cje.2021.00.121
Abstract:
Weighted sampling methods based on k-nearest neighbors have been demonstrated to be effective in solving the class imbalance problem. However, they usually ignore the positional relationship between a sample and the heterogeneous samples in its neighborhood when calculating sample weights. This paper proposes a novel neighborhood-weighted Bagging (NWBBagging) sampling method to improve the Bagging algorithm's performance on imbalanced datasets. It considers the positional relationship between the center sample and the heterogeneous samples in its neighborhood when identifying critical samples. A parameter reduction method is also proposed and incorporated into the ensemble learning framework, which reduces the number of parameters and increases the diversity of the classifiers. We compare NWBBagging with some state-of-the-art ensemble learning algorithms on 34 imbalanced datasets, and the results show that NWBBagging achieves better performance.
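The neighborhood-weighting idea can be illustrated with a toy 1-D sketch in which a sample's weight grows with the number of opposite-class ("heterogeneous") samples among its k nearest neighbors (a hypothetical weighting formula; the paper's scheme additionally accounts for the positional relationship of those neighbors):

```python
def neighborhood_weights(samples, labels, k=3):
    """Hypothetical k-NN weighting sketch: weight each sample by the
    fraction of opposite-class samples among its k nearest neighbors,
    so borderline samples receive larger weights."""
    n = len(samples)
    weights = []
    for i in range(n):
        # distances to every other sample (1-D toy feature space)
        dists = sorted(
            (abs(samples[i] - samples[j]), labels[j])
            for j in range(n) if j != i
        )
        hetero = sum(1 for _, lab in dists[:k] if lab != labels[i])
        weights.append(1.0 + hetero / k)  # base weight 1, boosted near the boundary
    return weights
```

On `samples=[0.0, 0.1, 0.2, 1.0, 1.1]` with `labels=[0, 0, 0, 1, 1]`, the minority samples near the class boundary get the largest weights.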
Available online, doi: 10.1049/cje.2020.00.185
Abstract:
Carotid artery stenosis is a serious medical condition that can lead to stroke. By using machine learning methods to construct a classifier model, carotid artery stenosis can be diagnosed from transcranial Doppler data. We propose an improved fuzzy support vector machine model to predict carotid artery stenosis, with the maximum geometric mean as the optimization target. The fuzzy membership function is obtained by combining information entropy with the normalized class-center distance. Experimental results show that the proposed model is superior to the benchmark models on the sensitivity and geometric mean criteria.
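A membership function of the general shape the abstract describes could combine the two ingredients as follows (an illustrative formula only, not the paper's exact construction):

```python
import math

def membership(sample, class_center, max_dist, class_probs):
    """Illustrative fuzzy membership: combine the normalized distance to
    the class center with the Shannon entropy of a class-probability
    estimate, so distant or label-uncertain samples get low membership."""
    # normalized class-center distance in [0, 1]
    d = abs(sample - class_center) / max_dist
    # entropy of the probability estimate, normalized to [0, 1]
    h = -sum(p * math.log2(p) for p in class_probs if p > 0) / math.log2(len(class_probs))
    return (1.0 - d) * (1.0 - h)
```

A sample at the class center with a confident label gets membership 1; a sample halfway out with a 50/50 label estimate gets membership 0.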
Available online, doi: 10.1049/cje.2021.00.309
Abstract:
The robustness of adversarial examples to image scaling transformations is usually ignored when adversarial attacks are proposed, yet image scaling is often the model's first step for transforming input images of various sizes into a fixed size. We evaluate the impact of image scaling on the robustness of adversarial examples applied to image classification tasks. We set up an image scaling system to provide a basis for robustness evaluation and conduct experiments in different situations to explore the relationship between image scaling and the robustness of adversarial examples. Experimental results show that the various scaling algorithms have a similar impact on the robustness of adversarial examples, whereas the scaling ratio impacts it significantly.
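One of the scaling algorithms such an evaluation system would include is nearest-neighbor scaling, sketched here for a 2-D image stored as nested lists (a generic implementation, independent of the paper's system):

```python
def scale_nearest(img, new_h, new_w):
    """Nearest-neighbor scaling of a 2-D image: each output pixel copies
    the source pixel whose coordinates map onto it, which is why a
    pixel-level adversarial perturbation may be dropped entirely when
    downscaling."""
    h, w = len(img), len(img[0])
    return [[img[int(r * h / new_h)][int(c * w / new_w)]
             for c in range(new_w)] for r in range(new_h)]
```

Downscaling a 4x4 image to 2x2 keeps only one pixel out of every 2x2 block, so any perturbation confined to the discarded pixels vanishes.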
Available online, doi: 10.1049/cje.2020.00.417
Abstract:
With the recent increase in the number of Internet of Things (IoT) services, an intelligent scheduling strategy is needed to manage them. In this paper, the problem of automatic choreography of microservices in the IoT is explored. A reinforcement learning (RL) algorithm, twin delayed deep deterministic policy gradient (TD3), is used to generate the optimal choreography policy under the framework of a software-defined network. The optimal policy is gradually reached during the learning procedure despite the dynamic characteristics of the network environment. The simulation results show that, compared with other methods, the TD3 algorithm converges faster after a certain number of iterations and performs better than non-RL algorithms by obtaining the highest reward. The TD3 algorithm can efficiently adjust the traffic transmission path and provide qualified IoT services.
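The TD3 ingredient most relevant to stable convergence is the clipped double-Q target, reproduced here in its generic form (standard TD3, not this paper's full choreography setup):

```python
def td3_target(reward, gamma, q1_next, q2_next, done=False):
    """Clipped double-Q target at the heart of TD3: take the minimum of
    the two target critics' next-state values to curb the overestimation
    bias that destabilizes single-critic actor-critic methods."""
    return reward + (0.0 if done else gamma * min(q1_next, q2_next))
```

For example, with reward 1.0, discount 0.9, and critic estimates 2.0 and 3.0, the target is 1.0 + 0.9 * 2.0 = 2.8.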
Available online, doi: 10.1049/cje.2021.00.059
Abstract:
Adopting software-defined technology to decouple the functional components of the industrial control system (ICS) in a service-oriented, distributed form is an important way for the industrial internet of things (IIoT) to integrate information technology (IT), communication technology (CT), and operation technology (OT). This paper therefore presents the concept of software-defined control architecture (SDCA) and describes the time-consistency requirements under this paradigm shift in ICS architecture. By analyzing the physical-clock and virtual-clock mechanism models, the global clock synchronization space is logically divided into physical and virtual clock synchronization domains, and a formal description of the global clock synchronization space is proposed. Based on an analysis of the clock state model, a physical-clock linear filtering synchronization model is derived, and a distributed observation fusion filtering model is constructed that considers the two observation modes of the virtual clock, realizing time synchronization of the global clock space through layer-by-layer timestamp transfer and fusion estimation. Finally, the simulation results show that the proposed model can significantly improve the accuracy and stability of clock synchronization.
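The linear filtering idea can be illustrated with a minimal scalar filter that tracks a clock offset from noisy timestamp measurements (a simplified stand-in, not the paper's fusion model; the noise variances here are arbitrary):

```python
def clock_offset_filter(offsets, q=1e-4, r=1e-2):
    """Scalar Kalman-style linear filter for clock-offset estimation:
    the offset is modeled as near-constant (process noise q) and each
    timestamp exchange yields a noisy offset measurement (noise r)."""
    x, p = 0.0, 1.0          # offset estimate and its variance
    for z in offsets:
        p += q               # predict: offset drifts slightly
        k = p / (p + r)      # Kalman gain
        x += k * (z - x)     # correct with the measured offset
        p *= (1 - k)
    return x
```

Fed a stream of measurements oscillating around a true offset, the estimate settles near that offset while averaging out the measurement noise.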
Available online, doi: 10.1049/cje.2021.00.241
Abstract:
In rail transit systems, improving transportation efficiency has become a research hotspot. In recent years, train control systems based on virtual coupling have attracted the attention of many scholars. The train operation control method is not only the key to realizing a virtually coupled train operation control system but also the key to preventing accidents. Therefore, building on existing research, a virtually coupled train model with nonlinear dynamics is established. The recursive least squares method is then applied to the train running-process data to identify the model parameters of the nonlinear coupling process, and the identified parameters are used in a variable-parameter artificial potential field (VAPF). A fusion controller combining feature-based generalized predictive control (GPC) and VAPF is used to control the virtually coupled trains and prevent collisions. Finally, a section of the Beijing-Shanghai high-speed railway is taken as the background to verify the effectiveness of the proposed method.
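Generic recursive least squares for a two-parameter linear model can be sketched as follows (the paper identifies a nonlinear coupling model; this toy identifies y = a*u + b from running-process data to show the recursion itself):

```python
def rls_identify(us, ys, lam=1.0):
    """Recursive least squares for y = a*u + b: each new sample updates
    the parameter estimate via a gain computed from the covariance P,
    so parameters are identified online as data streams in."""
    theta = [0.0, 0.0]                   # parameter estimates [a, b]
    P = [[1e6, 0.0], [0.0, 1e6]]         # large initial covariance
    for u, y in zip(us, ys):
        phi = [u, 1.0]                   # regressor
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        K = [Pphi[0] / denom, Pphi[1] / denom]          # gain K = P*phi / denom
        err = y - (theta[0] * phi[0] + theta[1] * phi[1])
        theta = [theta[0] + K[0] * err, theta[1] + K[1] * err]
        # covariance update P = (P - K*phi'*P) / lam (P stays symmetric)
        P = [[(P[0][0] - K[0] * Pphi[0]) / lam, (P[0][1] - K[0] * Pphi[1]) / lam],
             [(P[1][0] - K[1] * Pphi[0]) / lam, (P[1][1] - K[1] * Pphi[1]) / lam]]
    return theta
```

On noiseless data generated by y = 2u + 1, the estimates converge to a ≈ 2 and b ≈ 1 within a few samples.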
Available online, doi: 10.1049/cje.2021.00.373
Abstract:
Optimal trajectory planning is a fundamental problem in robotics research. For the time-optimal trajectory planning of a robotic arm in motion, this paper proposes a method based on segmented polynomial interpolation functions with a locally chaotic particle swarm optimization (LCPSO) algorithm. While still converging in the early or middle part of the search, the algorithm mitigates the local-convergence problem of the traditional particle swarm optimization (PSO) and improved-learning-factor PSO (IFPSO) algorithms. Finally, simulation experiments are executed in joint space to obtain the optimal time and a smooth motion trajectory for each joint, showing that the method can effectively shorten the running time of the robotic manipulator while ensuring the stability of the motion.
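A minimal standard PSO on a one-dimensional objective illustrates the optimization loop underlying such methods (the chaotic local search of LCPSO is omitted, and the inertia/learning factors here are conventional defaults, not the paper's):

```python
import random

def pso(f, lo, hi, n_particles=20, iters=100, seed=1):
    """Minimal 1-D particle swarm optimization: each particle's velocity
    is pulled toward its personal best and the global best; the global
    best after all iterations is returned as the minimizer estimate."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                          # personal bests
    gbest = min(xs, key=f)                 # global best
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and learning factors
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = w * vs[i] + c1 * r1 * (pbest[i] - xs[i]) + c2 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

# toy "execution time" objective with its minimum at t = 3
best = pso(lambda t: (t - 3.0) ** 2 + 1.0, 0.0, 10.0)
```

In a real trajectory-planning setting, the objective would instead evaluate the total traversal time of a candidate segmented-polynomial trajectory subject to joint constraints.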
Available online, doi: 10.1049/cje.2021.00.276
Abstract:
A software ecosystem (SECO) can be described as a special complex network. Previous complex-network models of an SECO have limitations in accurately reflecting the similarity between each pair of nodes. The community structure is critical to understanding network topology and function, and many scholars adopt evolutionary optimization methods for community detection. However, the information used in previous optimization models for community detection is incomplete and cannot be directly applied to community detection in an SECO. On this basis, a complex network for SECOs is first built, in which the cooperation intensity between developers is accurately calculated and the attributes of each developer are considered. A multi-objective optimization model is then formulated, and a community detection algorithm based on NSGA-II is employed to solve it. Experimental results demonstrate the advantages of the proposed method of calculating developer cooperation intensity and of our model.
Available online, doi: 10.1049/cje.2022.00.038
Abstract:
Malware detection has been a hot spot in cyberspace security and academic research. In the current cyberspace, malware authors use obfuscation technology to generate large numbers of malware variants, which imposes a heavy analysis burden on security researchers and consumes substantial time and space resources. We investigate the correlation between the opcode features of malicious samples and perform feature extraction, selection, and fusion by filtering redundant features, thus alleviating the dimensional-disaster problem and achieving efficient identification of malware families. To this end, we propose the MalFSM framework. Through feature selection, we reduce the 735 opcode features of the Kaggle dataset to 16, then fuse two metadata features (file line count and file size) for a total of 18 features, and find that machine learning classification on them is both efficient and highly accurate. We analyzed the correlation between the opcode features of malicious samples and interpreted the selected features. Comprehensive experiments show that the classification accuracy of MalFSM reaches up to 98.6% with a classification time of only 7.76 s on Microsoft's Kaggle malware dataset. Compared with similar studies, our method outperforms existing algorithms in efficiency. It provides an opcode feature selection strategy for researchers classifying homologous malware families, reducing the laborious work of data preprocessing, feature selection, and sample classification on general-purpose computing platforms.
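A filter-style opcode feature selection step can be sketched as follows (toy scoring by class-mean separation on a two-class example; MalFSM's actual selection criterion may differ):

```python
def select_top_features(X, y, k=2):
    """Toy filter-style feature selection: rank opcode-count columns by
    the absolute difference of their class means and keep the top k,
    discarding uninformative or redundant columns."""
    n_feat = len(X[0])
    scores = []
    for j in range(n_feat):
        col0 = [row[j] for row, lab in zip(X, y) if lab == 0]
        col1 = [row[j] for row, lab in zip(X, y) if lab == 1]
        score = abs(sum(col0) / len(col0) - sum(col1) / len(col1))
        scores.append((score, j))
    # indices of the k best-separating features, in column order
    return sorted(j for _, j in sorted(scores, reverse=True)[:k])

# rows = samples (opcode counts), column 1 is constant and thus useless
X = [[10, 1, 0], [12, 1, 1], [0, 1, 5], [1, 1, 6]]
y = [0, 0, 1, 1]
kept = select_top_features(X, y, k=2)
```

The constant column is dropped, mirroring the reduction from 735 opcode features to a small informative subset before fusing metadata features.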
Available online, doi: 10.1049/cje.2021.00.079
Abstract:
Phrase-indexed question answering (PIQA) seeks to improve the inference speed of question answering (QA) models by enforcing complete independence of the document encoder from the question encoder, showing that the constrained model can achieve significant efficiency at the cost of accuracy. In this paper, we aim to build a model under the PIQA constraint while reducing its accuracy gap with unconstrained QA models. We propose a novel framework, AnsDR, which consists of an answer boundary detector (AnsD) and an answer candidate ranker (AnsR). More specifically, AnsD is a QA model under the PIQA architecture designed to identify rough answer boundaries, and AnsR is a lightweight ranking model that finely re-ranks the potential candidates without losing efficiency. We perform extensive experiments on public datasets, and the results show that the proposed method achieves state-of-the-art performance on the PIQA task.
Available online, doi: 10.1049/cje.2021.00.236
Abstract:
Thinking space came into being with the emergence of human civilization. With the emergence and development of cyberspace, the two spaces began to interact, and in the collision of thinking and technology, new changes have taken place in both thinking space and cyberspace. This paper therefore divides the current integration and development of thinking space and cyberspace into three stages: the Internet of brain (IoB), the Internet of thought (IoTh), and the Internet of thinking (IoTk). For each stage, the contents and technologies needed to achieve the convergence and connection of the spaces are discussed. In addition, the Internet of creation (IoC) is proposed to represent the future development of thinking space and cyberspace. Finally, a series of open issues are raised that will become thorny factors in the development of the IoC stage.
Available online, doi: 10.1049/cje.2021.00.221
Abstract:
Pre-mRNA splicing is an essential step in gene transcription. Through the cutting of introns and exons, the DNA sequence can be decoded into different proteins with different biological functions; the cutting boundaries are defined by the donor and acceptor splice sites. Characterizing the nucleotide patterns that identify splice sites is sophisticated and challenges conventional methods. Recently, deep learning frameworks have been introduced to predict splice sites and exhibit high performance: they extract high-dimensional features from the DNA sequence automatically rather than inferring splice sites from prior knowledge of the relationships, dependencies, and characteristics of nucleotides. This paper proposes the AttentionSplice model, a hybrid construction combining multi-head self-attention, a convolutional neural network (CNN), and a bidirectional long short-term memory (Bi-LSTM) network. The performance of AttentionSplice is evaluated on Homo sapiens (human) and Caenorhabditis elegans (worm) datasets, and our model outperforms state-of-the-art models in the classification of splice sites. To make AttentionSplice interpretable, we extract important positions and key motifs that could be essential for splice site detection from the attention learned by the model. Our results could offer novel insights into the underlying biological roles and molecular mechanisms of gene expression.
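The attention mechanism from which such position importance is read off is scaled dot-product attention; below is a one-head, list-based sketch (generic, not the model's trained weights):

```python
import math

def attention_weights(query, keys):
    """Scaled dot-product attention weights over sequence positions:
    score each key against the query, scale by sqrt(d), and softmax,
    yielding a distribution that highlights the positions the model
    attends to (e.g. nucleotides near a splice site)."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    m = max(scores)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]
```

The weights always sum to 1, and positions whose key vectors align with the query receive the largest share, which is what makes attention maps readable as position-importance scores.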
Available online, doi: 10.1049/cje.2021.00.212
Abstract:
This paper proposes a novel design method for pyramidal horns under 3 dB beamwidth constraints. It is based on the general radiation patterns of the E/H planes derived from Huygens' principle. Through interpolation and fitting techniques, the maximum aperture-error parameter of the pyramidal horn in the E/H planes is obtained as a function of the angle and the electrical size of the aperture. First, the aperture size of the E (or H) plane is calculated using the optimal-gain principle. Second, the constraint equation of the other plane is derived. Finally, the intersection of the constraint equation and the interpolation function, which can be solved iteratively, contains all the solution information. The general radiation patterns neglect the influence of the Huygens element factor, which increases the error for large design beamwidths. In this paper, through theoretical analysis and simulation experiments, two correction formulas are employed to correct the influence of the Huygens element factor on the E/H planes. Simulation experiments and measurements show that the proposed method has a smaller design error over the range of 0-60 degrees of half-power beamwidth.
Available online, doi: 10.1049/cje.2021.00.289
Abstract:
Compared with the cloud computing environment, edge computing offers many choices of service provider because of its varied deployment environments, and this flexibility makes the environment more complex. The current edge computing architecture suffers from scattered computing resources and the limited resources of a single computing node; when an edge node carries too many task requests, the makespan of the tasks is delayed. We propose a load balancing algorithm based on a weighted bipartite graph for edge computing (LBA-EC), which makes full use of network edge resources, reduces user delay, and improves the user service experience. The algorithm schedules tasks in two phases. In the first phase, tasks are matched to different edge servers. In the second phase, tasks are optimally allocated to different containers within each edge server according to two indicators: energy consumption and completion time. The simulations and experimental results show that our algorithm effectively maps all tasks to available resources with a shorter completion time.
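The first-phase matching on a weighted bipartite graph can be illustrated by brute-force minimum-cost assignment (illustrative only: LBA-EC's matching and container allocation are more elaborate, and brute force is feasible only for tiny instances):

```python
from itertools import permutations

def match_tasks(cost):
    """Brute-force minimum-cost matching on a weighted bipartite graph:
    try every one-to-one assignment of n tasks to n edge servers and
    keep the one with the smallest total cost."""
    n = len(cost)                      # cost[i][j]: cost of task i on server j
    best, best_assign = float("inf"), None
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best:
            best, best_assign = c, perm
    return best, list(best_assign)

# hypothetical cost[i][j] = estimated completion time of task i on server j
cost = [[4, 2, 8],
        [4, 3, 7],
        [2, 1, 6]]
best_cost, assign = match_tasks(cost)
```

For larger instances, a polynomial-time matching algorithm (e.g. the Hungarian algorithm) would replace the factorial enumeration; the optimization objective stays the same.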