Abstract: Cross-user deduplication is an emerging technique for eliminating redundant uploads in cloud storage. Its deterministic response indicating the existence of data creates a side channel for attackers, which puts privacy in the cloud at risk. Neither this side-channel attack nor the further appending-chunks attack can be well resisted by current solutions, which has become a major obstacle to adopting this technique. We propose a secure cross-user deduplication scheme, called the Request-merging-based deduplication scheme (RMDS), which is the first to consider resistance against the appending-chunks attack, in addition to the side-channel attack, in a lightweight way. We utilize the proposed XOR-based chunk-level server-side storage structure together with a request-merging strategy to obfuscate attackers with minimized communication overhead. The experimental results show that, with security guaranteed, the proposed scheme is more efficient than the state of the art.
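The side channel the abstract above mitigates can be illustrated with a minimal sketch (hypothetical names; a plain hash-indexed store, not the paper's XOR-based structure): a deterministic existence reply lets an attacker confirm guessed content.

```python
import hashlib

def chunk_exists(store, chunk):
    """Server-side existence check used by naive cross-user deduplication.

    The deterministic yes/no reply is the side channel: it reveals to the
    uploader whether some other user already stored this exact chunk.
    """
    return hashlib.sha256(chunk).hexdigest() in store

# Hypothetical store already holding another user's chunk.
store = {hashlib.sha256(b"salary: 90000").hexdigest()}

# An attacker confirms a guessed content purely from the reply:
assert chunk_exists(store, b"salary: 90000")       # guess confirmed
assert not chunk_exists(store, b"salary: 90001")   # guess rejected
```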
Abstract: Si and Ding proposed a stream cipher with two keys (the first and the second key) and an expected security strength. To further measure its security, we analyze the stream cipher by considering the selective discrete Fourier spectra attack and the fast selective discrete Fourier spectra attack. The two attacks reveal the fact that the second key is more important than the first: if the second key is leaked, the first key can be obtained with a time complexity lower than that of the expected security. In addition, we analyze the ability of the stream cipher to resist the guess-and-determine attack. The results show that an attacker is able to obtain the two keys with an exponentially improved time complexity and a polynomial data complexity. This implies that a more secure permutation over finite fields is needed to design a new binary additive stream cipher achieving the expected security level.
Abstract: The Electronic health record (EHR), as the core of the e-healthcare system, is an electronic version of a patient's medical history, recording personal health-related information. EHR embodies the value of disease monitoring through large-scale sharing via the Cloud service provider (CSP). However, its health-data-centric nature makes the EHR more attractive to adversaries than other outsourced data. Moreover, there may even be malicious users who deliberately leak their access privileges for profit. An e-healthcare system with a black-box traceable and robust data security mechanism is presented for the first time. Specifically, we propose an effective scheme, P2HBT, which can perform fine-grained access control on encrypted EHRs, prevent the leakage of privacy contained in access policies, and support the tracing of traitors. The scheme is proved fully secure under the standard model. Performance analysis demonstrates that P2HBT achieves the design goals and outperforms existing schemes in terms of storage and computation overhead.
Abstract: As a kind of generator of pseudorandom sequences, the Feedback shift register (FSR) is widely used in channel coding, cryptography, and digital communication. A necessary and sufficient condition for the nonsingularity of a feedback shift register of degree at most three over a finite field is established. Using this result, the nonsingularity of such a feedback shift register can easily be determined from the algebraic normal form of its feedback function.
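For small degrees, nonsingularity can also be checked by brute force on the state map; a minimal sketch over the binary field (a hypothetical helper, not the paper's algebraic-normal-form criterion):

```python
from itertools import product

def is_nonsingular(f, n):
    """Brute-force nonsingularity check for an n-stage binary FSR.

    f: feedback function taking an n-bit state (x0, ..., x_{n-1}) to 0/1.
    The FSR state map (x0, ..., x_{n-1}) -> (x1, ..., x_{n-1}, f(x0, ..., x_{n-1}))
    is nonsingular iff it is a bijection on {0,1}^n.
    """
    images = {tuple(s[1:]) + (f(*s),) for s in product((0, 1), repeat=n)}
    return len(images) == 2 ** n

# Classical binary criterion (Golomb): the map is nonsingular iff
# f(x0, ..., x_{n-1}) = x0 XOR g(x1, ..., x_{n-1}) for some function g.
assert is_nonsingular(lambda x0, x1, x2: x0 ^ (x1 & x2), 3)  # has the x0 + g form
assert not is_nonsingular(lambda x0, x1, x2: x1 & x2, 3)     # no x0 term: singular
```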
Abstract: Digital signature is one of the most important cryptographic primitives. Recently, more and more work has been done to construct signatures over lattice problems to keep them secure in the quantum age. Among them, a ring-based signature scheme named Dilithium is the most efficient one and a candidate in the third round of the National Institute of Standards and Technology's post-quantum cryptography project. To make such schemes work well in large networks, we construct the first ring-based Identity-based signature (IBS) scheme for lightweight authentication. The construction in this paper relies on the transformations introduced by Bellare et al. in the Journal of Cryptology (Vol.22, No.1, pp.1–61, 2009), and its security can be proved under the hardness of the ring-learning-with-errors problem in the random oracle model. Owing to a better trapdoor and polynomial ring setting, our proposed scheme is much better than previous ones in terms of both computation and communication complexity.
Abstract: Traditional crowdsourcing based on a centralized management platform is vulnerable to Distributed denial of service (DDoS) attacks and single points of failure. Combining blockchain technology with crowdsourcing can solve these problems well, enabling users to realize peer-to-peer transactions and collaboration based on decentralized trust in distributed systems whose nodes need not trust each other. Although current methods solve the above problems, task publishers select workers based on their reputation values, which has two disadvantages: subjectivity and difficulty in setting initial values. Owing to the complexity of the crowdsourcing network, there will be malicious users in the network, and the requirement for anonymity protects legitimate and malicious users alike. To solve these problems, we propose an attribute-based worker selection scheme using private set intersection technology. Our scheme also realizes the disclosure of malicious users' identities. A concrete example of the scheme is given.
Abstract: Frequent subgraph mining (FSM) is a subset of the graph mining domain that is extensively used for graph classification and clustering. Over the past decade, many efficient FSM algorithms have been developed, with improvements generally focused on reducing the time complexity by changing the algorithm structure or using parallel programming techniques. FSM algorithms also suffer from high memory consumption, another problem that should be solved. In this paper, we propose a new approach called Predictive dynamic sized structure packing (PDSSP) to minimize the memory needs of FSM algorithms. Our approach redesigns the internal data structures of FSM algorithms without making algorithmic modifications. PDSSP offers two contributions. The first is the Dynamic Sized Integer Type, a newly designed unsigned integer data type, and the second is a data structure packing technique that changes the behavior of the compiler. We examined the effectiveness and efficiency of the PDSSP approach by experimentally embedding it into two state-of-the-art algorithms, gSpan and Gaston, and compared our implementations with the performance of the originals. Nearly all results show that our proposed implementation consumes less memory at each support level, suggesting that PDSSP extensions can save memory, with peak memory usage decreasing by up to 38% depending on the dataset.
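The second contribution, data structure packing, can be illustrated with Python's `struct` module (an analogy to the compiler behavior described above, not the PDSSP implementation): the native layout pads fields to alignment boundaries, while a packed layout stores them back to back.

```python
import struct

# A record of one char followed by one 4-byte int:
padded = struct.calcsize("ci")    # native layout: alignment padding added
packed = struct.calcsize("=ci")   # standard layout: no padding -> 1 + 4 = 5

# On a typical 64-bit platform the native layout grows to 8 bytes;
# packing trades a little access speed for a smaller memory footprint.
assert packed == 5
assert padded >= packed
```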
Abstract: A technical investigation, research effort, and implementation are presented to correct column fixed-pattern noise and black level in a large-array Complementary metal oxide semiconductor (CMOS) image sensor. Through a comparison of reported solutions, and based on our previous analysis of the non-ideal factors and error sources of the piecewise Digital to analog converter (DAC) in multiple channels, design considerations for a large-array CMOS image sensor are given, and an improved, accurate piecewise DAC with an adaptive switch technique is developed. The theory is verified by a high-dynamic-range, low-column-Fixed-pattern-noise (FPN) CMOS image sensor prototype chip consisting of an 8320×8320 pixel array, designed and fabricated in a 55nm CMOS 1P4M standard process. The chip active area is 48mm×48mm with a pixel size of 5.7μm×5.7μm. The measured results achieve a high intrinsic dynamic range of 75dB, a low FPN and black level of 0.06%, and a low photo response non-uniformity of 1.5%, and an excellent raw sample image is taken by the prototype sensor.
Abstract: Analytical models for passive linear structures, such as metallic traces and vias, are proposed for simulations at the package and Printed circuit board (PCB) levels. In the proposed method, traces are modeled based on transmission line theory, whereas vias are described by the parallel-plate impedance and several equivalent circuit elements. The proposed models can be applied to efficiently simulate composite passive linear structures. Several scenarios are analyzed, including traces with two or three widths, traces routed into different layers, and interconnects commonly used in PCBs. The results of the models are compared with those from full-wave simulations and experiments. An improvement in computation speed over the full-wave simulations is observed within the effective range of the models. In our measurements, a compensation approach for the impedance mismatch in parameter measurements is analyzed and calculated, which can significantly simplify the experimental process.
Abstract: Motivated by the problems of non-universality and over-reliance on the original reference image in High dynamic range (HDR) Image quality assessment (IQA), a convolutional neural network-based algorithm for no-reference HDR image quality assessment is proposed. The Salience detection by self-resemblance (SDSR) algorithm, which extracts the salient regions of the HDR image, is used to simulate the human visual attention mechanism. A visual quality perception network for training quality prediction models is then designed according to the visual characteristics of luminance and contrast sensitivity. This network consists of an Error estimation network (Error-net), a Perceptual resistance network (PR-net), and a mixing function. The experimental results indicate that the proposed method is highly consistent with subjective perception, with the assessment metrics Spearman rank-order correlation coefficient (SROCC), Pearson product-moment correlation coefficient (PLCC), and Root mean square error (RMSE) reaching 0.941, 0.910, and 8.176, respectively. It is comparable with classic full-reference HDR IQA methods.
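The three reported metrics can be computed as follows; a minimal sketch with hypothetical helper names, using rank-based Spearman (assuming no tied scores) rather than a library routine:

```python
import numpy as np

def _pearson(a, b):
    """Pearson correlation of two 1-D arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a @ b) / np.sqrt((a @ a) * (b @ b)))

def iqa_metrics(predicted, subjective):
    """Return (SROCC, PLCC, RMSE) between objective quality predictions
    and subjective scores, the three metrics reported in the abstract."""
    p = np.asarray(predicted, dtype=float)
    s = np.asarray(subjective, dtype=float)
    # Spearman = Pearson on the ranks (no-ties assumption for this sketch).
    srocc = _pearson(np.argsort(np.argsort(p)).astype(float),
                     np.argsort(np.argsort(s)).astype(float))
    plcc = _pearson(p, s)
    rmse = float(np.sqrt(np.mean((p - s) ** 2)))
    return srocc, plcc, rmse
```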
Abstract: A two-level hierarchical scheme for video-based person re-identification (re-id) is presented, with the aim of learning a pedestrian appearance model through more complete walking-cycle extraction. Specifically, given a video with consecutive frames, the objective of the first level is to detect the key frame, using the lightweight Convolutional neural network (CNN) PCANet, to reflect the summary of the video content. At the second level, on the basis of the detected key frame, the pedestrian walking cycle is extracted from the long video sequence. Moreover, Local maximal occurrence (LOMO) features of the walking cycle are extracted to represent the pedestrian's appearance information. In contrast to existing walking-cycle-based person re-id approaches, the proposed scheme relaxes the limit on the number of steps in a walking cycle, making it flexible and less affected by noisy frames. Experiments are conducted on two benchmark datasets: PRID 2011 and iLIDS-VID. The experimental results demonstrate that the proposed scheme outperforms six state-of-the-art video-based re-id methods and is more robust to severe video noise and variations in pose, lighting, and camera viewpoint.
Abstract: Understanding sensorimotor neural circuits plays an important role in the study of behavioral mechanisms. By virtue of their relatively simple brain structure and sophisticated locomotion behaviors, insects are selected as comparative research subjects to discover the basic principles of neural science. Specific abdominal swing behaviors of tethered bees induced by the optomotor response are realized. To model the functionality of the mushroom body in optic-flow-induced swing behaviors, a simplified three-layer Spiking neural network (SNN) is proposed. The spike response model is used as the single-neuron model in the proposed SNN, which is trained by a supervised learning method. The computational model can accurately simulate and predict the bees' abdominal swing behaviors, whose direction is ipsilateral to, and frequency proportional to, the optic flow stimulus.
Abstract: Different living environments of cancer samples lead to different molecular mechanisms of cancer development, which in turn lead to different cancer subtypes. How to identify cancer subtypes is a key issue for the realization of precision medicine. With the development of high-throughput technologies, multi-omics data, which can better explain the different causes of cancer, have emerged. However, current methods for analyzing cancer subtypes using multi-omics data are mostly derived from population cancer sample data and ignore the differences between individual cancer samples. A joint analysis of multi-omics data based on a single sample may therefore reveal more information about these differences. A strategy for identifying cancer subtypes is proposed based on Single-sample information gain (SSIG), which constructs the sample feature matrix by considering sample heterogeneity. Applying this strategy to currently popular subtype identification methods, cancer subtypes can be identified more accurately and the mechanism of cancer can be explored from the perspective of a single sample. Comparisons of different methods under different clustering measures, together with survival analysis, show that SSIG is more suitable for cancer subtype identification than the original multi-omics data and makes it easier to mine the subtype classification mechanism hidden behind the data.
Abstract: A novel step-by-step linearization high-order Extended Kalman filter (SH-EKF) is designed for a class of nonlinear systems composed of linear functions and products of several separable basic functions. The basic functions in the state and measurement models are defined as latent variables, and the state and measurement models are equivalently formulated as pseudo-linear models combining the original variables with the latent variables. Treating the latent variables as new variables, a dynamic linear model between each latent variable and the other latent variables with the original state is established, and the measurement model is rewritten in a first-order linear product form between the current state and each latent variable. The latent variables are then solved by a Kalman filter step by step, yielding the stepwise linearized high-order extended Kalman filter. Illustrative examples are presented to demonstrate the effectiveness of the new algorithm.
Abstract: This paper proposes a novel centralized Constant false alarm rate (CFAR) detector for multistatic sonar systems. The detector employs the idea of Variability index (VI) CFAR detection to adaptively select the matched detection algorithm in diversified undersea environments. All the echo data from the multistatic sonar receivers are transmitted to the centralized fusion center. First, the background statistics of the reference cells from different nodes are analyzed. Then an appropriate centralized detection algorithm is chosen according to the background statistics, from among the centralized Cell-averaging CFAR (CA-CFAR), greatest-of CFAR, and order-statistic CFAR detection algorithms. The performance of the proposed detector is analyzed through computer simulation and measured sonar data. The results show that, compared with the centralized CA-CFAR detector, the introduced centralized detector achieves better robustness in multiple heterogeneous undersea environments.
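As a baseline for the selection described above, the cell-averaging branch can be sketched as follows (a generic 1-D CA-CFAR with hypothetical parameters, not the paper's centralized detector; the scaling factor alpha would normally be derived from the desired false-alarm probability):

```python
import numpy as np

def ca_cfar(x, num_ref=8, num_guard=2, alpha=3.0):
    """Cell-averaging CFAR on a 1-D power sequence.

    For each cell under test, the noise level is the mean of num_ref
    reference cells on each side (separated by num_guard guard cells),
    and a detection is declared when the cell exceeds alpha * noise.
    """
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(num_ref + num_guard, n - num_ref - num_guard):
        lead = x[i - num_guard - num_ref : i - num_guard]
        lag = x[i + num_guard + 1 : i + num_guard + num_ref + 1]
        noise = (lead.sum() + lag.sum()) / (2 * num_ref)
        hits[i] = x[i] > alpha * noise
    return hits

# A strong echo buried in unit-level noise is detected:
x = np.ones(40)
x[20] = 30.0
assert ca_cfar(x).nonzero()[0].tolist() == [20]
```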
Abstract: We propose an integrated path planning method for multiple automated guided vehicles performing logistics delivery within a real-world warehouse environment with obstacles. Applied on each vehicle, the proposed method enables the vehicles to plan their paths autonomously. The path planning consists of three parts: K-means-based task point clustering, genetic-algorithm-based task point ordering, and probabilistic-road-map-based best path search. Vehicle conflict resolution relies on constructing the probabilistic road map over the realistic map with obstacles. The simulation results validate that the clustering and ordering are necessary for the path planning: both the path planning time and the running time of the Automated guided vehicles (AGVs) can be dramatically reduced.
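The first part, task point clustering, can be sketched with plain Lloyd's k-means (a generic sketch, not the paper's implementation): each AGV is then assigned one spatially compact group of delivery tasks.

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on 2-D task points.

    Returns (labels, centers). This simple sketch assumes no cluster
    becomes empty during the iterations, which holds for well-separated
    task groups like the example below.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    centers = pts[rng.choice(len(pts), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute means.
        dists = ((pts[:, None] - centers[None]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        centers = np.array([pts[labels == j].mean(0) for j in range(k)])
    return labels, centers
```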
Abstract: The ideal fusion result of infrared and visible images should contain the important infrared objects and preserve as much of the visible textural detail as possible, so that the fused image is more consistent with human visual perception. For this purpose, a novel infrared and visible image fusion framework is proposed. Under the guidance of the model, the source images are decomposed into large-scale edge, small-scale textural detail, and coarse-scale base-level information. Among these, the large-scale edge information contains the main infrared features; on this basis, the infrared image is further segmented into object, transition, and background regions by the OTSU multi-threshold segmentation algorithm. Finally, the fusion weights for the decomposed sub-information are determined by the segmentation results, so that the infrared object information is effectively injected into the fused image and the important visible textural detail is preserved as much as possible. Experimental results show that the proposed method can not only highlight the infrared objects but also preserve the visual information in the visible image as much as possible. The fusion results are superior to those of commonly used representative fusion methods in both subjective perception and objective evaluation.
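The OTSU multi-threshold step can be sketched as an exhaustive two-threshold search maximizing the between-class variance, which yields the three regions (background / transition / object); a simple, unoptimized sketch assuming integer pixel values below `levels`:

```python
import numpy as np

def otsu_two_thresholds(img, levels=256):
    """Exhaustive two-threshold Otsu on an integer image.

    Splits the histogram into three classes [0,t1), [t1,t2), [t2,levels)
    by maximizing the between-class variance (equivalently, the sum of
    w * mu^2 over classes, since the total mean is fixed).
    """
    hist = np.bincount(img.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()
    g = np.arange(levels)
    best, best_t = -1.0, (0, 0)
    for t1 in range(1, levels - 1):
        for t2 in range(t1 + 1, levels):
            var = 0.0
            for lo, hi in ((0, t1), (t1, t2), (t2, levels)):
                w = p[lo:hi].sum()
                if w > 0:
                    mu = (g[lo:hi] * p[lo:hi]).sum() / w
                    var += w * mu * mu
            if var > best:
                best, best_t = var, (t1, t2)
    return best_t
```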
Abstract: With its characteristics of decentralization, security, data traceability, and tamper resistance, the blockchain has been widely used in various domains. Considering the differences in device performance, the light client was proposed so that devices unable to store a full blockchain copy can also participate in blockchain transactions. However, the light client has to communicate with full nodes and verify the authenticity of transactions, which imposes communication, computation, and storage overheads on the light client. These overheads cannot be ignored for some low-performance devices, such as embedded devices or IoT chips, so the current light client scheme does not work in this situation. We propose LOPE (a Low-overhead payment vErification method) for poor-capacity nodes in the blockchain system. In LOPE, a grouping protocol is designed to partition full nodes into groups that serve the verification requests of the light client. In addition, Practical byzantine fault tolerance (PBFT) is used to ensure that the light client gets a credible result despite a few dishonest nodes in the group. We implement LOPE and evaluate it in a testbed. The experimental results show that LOPE reduces the communication overhead by more than half, largely degrades the computation overhead of the light client, and avoids storing the hash roots of block headers in the light client. We also conduct a theoretical analysis of the performance improvement and security of LOPE.
Abstract: The Industrial Internet of things (IIoT) deploys a large number of smart devices to obtain industrial data, which are transmitted to the cloud for analysis to improve industrial productivity. The management of large-scale devices is complicated, and it is also a challenge to choose a high-quality cloud service for data analysis as the number of services with similar functions increases. To address these issues, we propose a reliable fog-cloud service solution with a blockchain-based fog-cloud architecture. In the fog layer, we build a management blockchain between fog servers and design a management method for industrial devices; in the cloud layer, we construct a service blockchain between cloud service providers to form an open "service market". A Quality-of-service and reputation-based matching algorithm and a reputation-based consensus algorithm are designed. The simulation results show the correctness and efficiency of the algorithms and validate the effectiveness of the proposed solution.
Abstract: IOTA is a typical blockchain designed for IoT applications. The Markov chain Monte Carlo (MCMC) algorithm used in IOTA may leave a large number of blocks unverified, which increases transaction delay to a certain extent. We propose a Stable matching algorithm (SMA) based on matching theory to stimulate nodes to verify blocks, thereby reducing the number of unverified blocks and the consensus delay. The structure of our IoT blockchain uses a Directed acyclic graph (DAG) to improve the transaction processing capability. The nodes in the network are abstracted as transaction issuers and transaction verifiers. A verification service scheduling system assigns transactions to the verifiers and achieves the optimal matching. We design a trust evaluation mechanism that offers verifiers references and rewards for checking transactions. The simulation results show that SMA can significantly reduce the number of orphan blocks and improve the transaction throughput, which helps to improve the reliability of the IoT blockchain.
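A stable matching between issuers and verifiers can be computed with the textbook Gale-Shapley algorithm; the sketch below (hypothetical names) shows only the matching core, not the paper's trust evaluation or scheduling system:

```python
def gale_shapley(issuer_prefs, verifier_prefs):
    """Textbook Gale-Shapley stable matching.

    issuer_prefs / verifier_prefs: dicts mapping each participant to its
    preference list over the other side. Issuers propose; returns a dict
    mapping each verifier to its matched issuer.
    """
    free = list(issuer_prefs)                  # issuers not yet matched
    next_pick = {i: 0 for i in issuer_prefs}   # next verifier to propose to
    rank = {v: {i: r for r, i in enumerate(p)}
            for v, p in verifier_prefs.items()}
    engaged = {}                               # verifier -> issuer
    while free:
        i = free.pop(0)
        v = issuer_prefs[i][next_pick[i]]
        next_pick[i] += 1
        if v not in engaged:
            engaged[v] = i                     # verifier was free: accept
        elif rank[v][i] < rank[v][engaged[v]]:
            free.append(engaged[v])            # verifier trades up
            engaged[v] = i
        else:
            free.append(i)                     # proposal rejected
    return engaged

m = gale_shapley({'i1': ['v1', 'v2'], 'i2': ['v1', 'v2']},
                 {'v1': ['i2', 'i1'], 'v2': ['i1', 'i2']})
assert m == {'v1': 'i2', 'v2': 'i1'}
```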
Abstract: We present a novel scheme of Cyclic remote implementation of partially unknown quantum operations (CRIPUQO) via a six-qubit entangled state as the quantum channel. Alice can remotely apply a partially unknown quantum operation Ud (d = 0, 1) to Bob's qubit b, Bob can remotely apply Ud at Charlie's site, and meanwhile Charlie can remotely apply Ud to Alice's qubit c. It is pointed out that the CRIPUQO in the opposite direction can be perfectly achieved by changing the quantum channel. The scheme is also generalized to systems with N observers, so that the CRIPUQO can be realized in quantum information networks with N observers in different directions by changing the quantum channels.
Abstract: The existing ionospheric delay correction methods in the Satellite based augmentation system (SBAS) are designed only for a single constellation and are over-conservative. In this paper, we propose a new method oriented towards GPS and GLONASS to achieve more accurate ionospheric delay correction and a tighter error bound. Considering that some Ionospheric pierce points (IPPs) are almost motionless, we apply a filtering process to estimate the ionospheric delays at Ionospheric grid points (IGPs) so as to better reflect the temporal variation of the ionosphere. We then conduct a simulation of ionospheric delay estimation at IGPs in dual constellations using the planar fit method and the filtering method. It can be concluded that the filtering method bounds the ionospheric delay correction error more tightly while also slightly improving the correction accuracy.
Abstract: A novel serially concatenated Gaussian minimum shift keying (GMSK) system is proposed for satellite communications subject to low SNR and limited power and spectrum resources. First, we design a Nonrecursive continuous-phase encoder (NRCPE) based GMSK scheme based on Rimoldi's decomposition. Then, a corresponding pilot-aided quasi-coherent demodulation algorithm is developed, whose basic principle is that a modified BCJR-based detection is performed on the received signals, with the initial and ending trellis states determined using very limited pilot overhead. Finally, we choose proper modulation parameters for the NRCPE-based GMSK signaling according to the trade-offs between power and spectral efficiency. The simulation results show that the LDPC-coded GMSK system using the proposed algorithm achieves excellent performance and also works well in the presence of large Doppler shifts and some burst errors.