Abstract: Traditional photography focuses on optimizing lenses toward a perfect imaging system. However, with the great development of computational resources and optical modulation devices, more powerful imaging capabilities can now be achieved with concise optics. Computational photography is an emerging interdisciplinary field that incorporates computational strategies into the traditional imaging system to break its limitations in dimensions such as spectrum, time, and space. Recent advances in different aspects have aroused great interest and introduced tremendous applications in biology, material science, and computer vision.
Abstract: The discernibility matrix is an elegant theoretical tool for obtaining reducts in rough set theory, but existing discernibility-matrix-based algorithms share the same problems of heavy computational load and large storage space, since the discernibility matrix contains numerous redundant elements and these algorithms use all elements to find reducts. We introduce a new method to compute attribute significance. A novel structure, called the minimal element selection tree, is proposed; it employs several strategies to eliminate redundant elements in the discernibility matrix. This paper presents two methods to find a minimal reduct for a given decision table based on this tree structure. Experimental results on UCI data show that the proposed approaches are more effective and efficient than the benchmark methods.
Abstract: Obtaining the classification of urban functions is an integral part of urban planning. Public bicycle systems have been booming in recent years. They convey human mobility and activity information, which can be closely related to the social function of an urban region. This paper discusses the potential use of public bicycle systems for recognizing the social function of urban regions, using one year's rent/return data of public bicycles. We found that the rent/return dynamics extracted from public bicycle systems exhibit clear patterns corresponding to the urban function classes of these regions. With seven features designed to characterize the rent/return pattern, a method based on the Smooth support vector machine (SSVM) is proposed to recognize the social function classes of urban regions. We evaluate our method on a large-scale real-world dataset collected from the public bicycle system of Hangzhou. The results show that our method can efficiently recognize different types of urban function areas, with the proposed SSVM achieving a best classification accuracy of 96.15%.
Abstract: With the rapid development of network information technology and the wide application of smart phones, tablet PCs, and other mobile terminals, online education plays an increasingly important role in social life. This article focuses on mining useful data from massive online education data by applying transfer learning on Hadoop: we construct an Online education data classification framework (OEDCF) and design an algorithm, Tr_MAdaBoost. This algorithm overcomes the restriction of traditional classification algorithms that data must be independent and identically distributed, so online education data can be classified correctly even when their distributions differ. At the same time, with the help of Hadoop's parallel processing architecture, OEDCF can greatly enhance the efficiency of data processing, creating favorable conditions for learning analytics and promoting personalized learning and other activities in the big data era.
Abstract: With the rapid development of smart driving technology, the security of smart driving algorithms is becoming more and more important. Four core smart driving algorithms are identified by studying the architecture of smart driving systems: local path planning, pedestrian detection, lane detection, and obstacle detection. The security issues of these algorithms are investigated by closely examining the work they carry out. We found vulnerabilities in all four algorithms, which can cause abnormal behavior and even road accidents for smart cars. The final experiment shows that the vulnerabilities of these algorithms do exist under certain circumstances and therefore pose high security risks. This study lays a foundation for improving the security of smart driving systems.
Abstract: New formulae for point addition and point doubling on elliptic curves over prime fields are presented. Based on these formulae, an improved Montgomery algorithm is proposed. The theoretical analysis indicates that it is about 13.4% faster than Brier and Joye's Montgomery algorithm. Experiments on the elliptic curve over a 256-bit prime field recommended by the National Institute of Standards and Technology and over a 256-bit prime field in Chinese elliptic curve standard SM2 support the theoretical analysis.
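The improved formulae themselves are not reproduced in the abstract, but the Montgomery algorithm being accelerated can be illustrated with a minimal sketch: textbook affine point addition and doubling on y^2 = x^3 + ax + b over GF(p), driven by a Montgomery ladder. The toy curve and all names here are illustrative assumptions, not the paper's improved formulae.

```python
# Textbook affine group law on y^2 = x^3 + ax + b over GF(p); None is the
# point at infinity. This is a didactic sketch, not a constant-time library.
def ec_add(P, Q, a, p):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def montgomery_ladder(k, P, a, p):
    # Invariant: R1 = R0 + P after every bit; both branches do one addition
    # and one doubling, which is the regularity Montgomery-style methods
    # exploit against side-channel analysis.
    R0, R1 = None, P
    for bit in bin(k)[2:]:
        if bit == '0':
            R0, R1 = ec_add(R0, R0, a, p), ec_add(R0, R1, a, p)
        else:
            R0, R1 = ec_add(R0, R1, a, p), ec_add(R1, R1, a, p)
    return R0
```

The ladder's output can be checked against naive repeated addition on a small curve such as y^2 = x^3 + 2x + 3 over GF(97).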
Abstract: Rotation symmetric Boolean functions (RSBFs) have attracted widespread attention due to their good cryptographic properties. We present a new construction of RSBFs with optimal algebraic immunity on an odd number of variables. The nonlinearity of the new function is much higher than that of other best-known RSBFs with optimal algebraic immunity. The algebraic degree of the constructed n-variable RSBF can achieve the upper bound n-1 when (n-1)/2 is odd or when (n-1)/2 is a power of 2, for n ≥ 11. In addition, the constructed function possesses almost perfect immunity to fast algebraic attacks for n = 11, 13, 15.
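The defining property of an RSBF, invariance of the function under cyclic rotation of its input bits, can be checked directly on a truth table. The sketch below is a generic property test, not the paper's construction:

```python
def is_rotation_symmetric(truth_table, n):
    # f is rotation symmetric iff f(x1,...,xn) = f(x2,...,xn,x1) for all
    # inputs; inputs are encoded as n-bit integers and rotated by one bit.
    for x in range(2 ** n):
        rot = (x >> 1) | ((x & 1) << (n - 1))
        if truth_table[x] != truth_table[rot]:
            return False
    return True
```

For example, the 3-variable majority function (whose value depends only on the Hamming weight of the input) is rotation symmetric, while the projection onto a single input bit is not.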
Abstract: By binding access policies to data, Ciphertext-policy attribute-based encryption (CP-ABE) makes data access control independent of any particular application and lets users face data directly. It is regarded as one of the most suitable access control methods for cloud storage systems and has attracted extensive research. In that research, a Hierarchical cryptography architecture (HCA) is often applied to improve system efficiency, but two issues remain open: illegal leakage of symmetric keys and low efficiency when revoking an attribute of a user. We propose an Access control scheme under Hierarchical cryptography architecture (ACS-HCA). In this scheme, a key derivation mechanism and a forward derivation function are used to avoid the leakage of symmetric keys, an All-or-Nothing transform is used to prevent the illegal reuse of symmetric keys, and attribute revocation is realized without re-issuing other users' private keys. Analyses and simulations demonstrate that our scheme imposes lower encryption cost on each owner and lower decryption cost on each user, while achieving high efficiency in revoking an attribute of a user.
Abstract: Fair division theory has emerged as a promising approach for throughput allocation in hybrid storage systems. However, most existing methods allocate throughput servicing time unfairly because they take no account of clients' request I/O latencies. We propose a Servicing time allocation method based on a client grouping mechanism (STCG). STCG establishes fair allocation policies based on clients' workload weights, which achieves a fair allocation of throughput servicing time and maximizes resource utilization. The experimental results show that STCG's allocation is fair within each client group and, compared with a recently proposed method, STCG can improve resource utilization by about 12% to 330%.
Abstract: Accurate segmentation of the Optic disc (OD) is significant for the automation of retinal analysis and retinal disease screening. This paper proposes a novel saliency-based optic disc segmentation method. It includes two stages: optic disc location and saliency-based segmentation. In the location stage, the OD is detected using a matched template and the density of the vessels. In the segmentation stage, we treat the OD as the salient object and formulate segmentation as a saliency detection problem. To measure the saliency of a region, the boundary prior and the connectivity prior are exploited: the geodesic distance to the window boundary is computed to measure the cost for a region to reach the window boundary. After thresholding and ellipse fitting, we obtain the OD. Experimental results on two public databases for OD segmentation show that the proposed method achieves state-of-the-art performance.
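The boundary-prior computation can be sketched as a multi-source shortest-path search: every window-boundary pixel is a zero-cost seed, and each step costs the absolute intensity difference, so a region enclosed by strong edges (such as a bright OD) accumulates a large geodesic distance to the boundary. This is a simplified illustration of the idea with assumed step costs, not the paper's exact implementation:

```python
import heapq
import numpy as np

def boundary_geodesic(img):
    """Multi-source Dijkstra from all window-boundary pixels; the cost of a
    4-neighbor step is the absolute intensity difference, so pixels walled
    off by strong edges end up with a large geodesic distance (salient)."""
    h, w = img.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for i in range(h):
        for j in range(w):
            if i in (0, h - 1) or j in (0, w - 1):   # boundary seeds, cost 0
                dist[i, j] = 0.0
                heapq.heappush(heap, (0.0, i, j))
    while heap:
        d, i, j = heapq.heappop(heap)
        if d > dist[i, j]:
            continue                                  # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nd = d + abs(float(img[ni, nj]) - float(img[i, j]))
                if nd < dist[ni, nj]:
                    dist[ni, nj] = nd
                    heapq.heappush(heap, (nd, ni, nj))
    return dist
```

On a flat background with a single bright blob, every background pixel reaches the boundary at zero cost, while blob pixels must pay the intensity jump once, which is exactly the separation the saliency measure relies on.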
Abstract: Clustering is fundamental in many fields involving big data. In this paper, a novel method based on Topological graph partition (TGP) is proposed to group objects. A topological graph is created for a data set, in which each object is connected to its k nearest neighbors. By computing the weight of each object, a decision graph over probabilities is obtained, and a cut threshold is conveniently selected where the weight probability becomes anomalously large. With this threshold, the noise edges are cut off and the topological graph is partitioned into several subgraphs, each connected subgraph being treated as a cluster. Comparative experiments demonstrate that the proposed method is more robust when clustering data sets with high dimensions, complex distributions, and hidden noise. It is insensitive to the input parameter and requires little prior knowledge.
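The cut-and-component step can be sketched as follows. The weight and decision-graph machinery is simplified here to a plain distance threshold on kNN edges, so this illustrates the graph-partition idea rather than the full TGP method:

```python
import numpy as np

def tgp_cluster(X, k=3, cut=1.0):
    """Connect each point to its k nearest neighbors, drop edges longer than
    the cut threshold (treated as noise edges), and return the connected
    components of the remaining graph as cluster labels."""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    parent = list(range(n))                       # union-find forest
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]         # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:       # skip self at position 0
            if D[i, j] <= cut:                    # keep only short edges
                parent[find(i)] = find(j)
    roots = [find(i) for i in range(n)]
    _, labels = np.unique(roots, return_inverse=True)
    return labels
```

On two well-separated point groups, the long inter-group edges exceed the threshold and the two components emerge as the two clusters.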
Abstract: Despite the extensive work on interactive mesh deformation, manipulating a geometrically complex mesh while producing realistic deformation results remains challenging. Example-driven deformation methods distinctly simplify the modeling process and produce realistic results by incorporating knowledge learned from shape space. We introduce a rotation-invariant feature representation and a reconstruction framework that accurately reconstructs vertex positions. Our feature representation allows both interpolation and extrapolation and can effectively blend multiple shapes. Based on this, we achieve an example-driven approach to mesh deformation. Using a collection of models as examples, our method produces natural deformation results guided by them, even under large movements of the handles. We also apply our representation and reconstruction method to semantic deformation transfer. The experimental results demonstrate the effectiveness of the proposed methods.
Abstract: The problem of approximating a homogeneous random field from asymmetric local average sampling is considered in this paper. As a general sampling result, a sufficient condition is obtained to ensure that the homogeneous random field can be approximated from local averages with probability 1, which extends the corresponding result for weak-sense stochastic processes to homogeneous random fields.
Abstract: False alarms and misdetections caused by the carrier leakage problem attract increasing attention on resource-limited platforms. To tackle the problem, we propose a target-protected carrier leakage cancellation algorithm that rebuilds the leakage signal from reference cells, with those cells randomly updated based on each detection result so as to exclude the targets. The proposed algorithm eliminates carrier leakage effectively without introducing false targets, because the random update strategy keeps target information out of the reference cells. The noise level of the proposed algorithm is lower than that of commonly used algorithms such as the Cell average constant false alarm rate (CA-CFAR) detection method, and the simulations verify that it performs better than the CA-CFAR method.
Abstract: Long short-term memory RNNs (LSTM-RNNs) have shown great success in the Automatic speech recognition (ASR) field and have become the state-of-the-art acoustic model for time-sequence modeling tasks. However, it is still difficult to train deep LSTM-RNNs while keeping the parameter number small. We use highway connections between memory cells in adjacent layers to train small-footprint highway LSTM-RNNs (HLSTM-RNNs), which are deeper and thinner than conventional LSTM-RNNs. Experiments on Switchboard (SWBD) indicate that we can train thinner and deeper HLSTM-RNNs with fewer parameters and a lower Word error rate (WER) than conventional 3-layer LSTM-RNNs. Compared with small-footprint LSTM-RNN counterparts, the small-footprint HLSTM-RNNs show a greater reduction in WER.
Abstract: This study attempts to answer complicated free-description questions in Chinese Gaokao Reading comprehension (RC) tasks. We found that quite a few questions can be answered by extracting sentences from the document and combining them, so we used a pipeline approach with two components: Answer sentence extraction (ASE) and Answer sentence fusion (ASF). Semantic vector similarity and topical distribution similarity were explored for ASE. An integer linear programming strategy, combining dependencies with a language model based on word importance, was used for ASF. As a first step towards this new challenge, we obtained encouraging results on actual exam questions from the Chinese-subject RC tasks of the Beijing Gaokao, which gave us insights into the techniques needed to solve real-world complex questions.
Abstract: The major challenge in modeling text sentiment classification is capturing the intrinsic semantic and emotional dependence information and the key parts of the emotional expression of text. To solve this problem, we propose a Coordinated CNN-LSTM-Attention (CCLA) model. We learn the vector representations of sentences with the CCLA unit; the semantic and emotional information of sentences and their relations are adaptively encoded into vector representations of the document. A softmax regression classifier is then used to identify the sentiment tendency of the text. Compared with other methods, the CCLA model captures both local and long-distance semantic and emotional information well. Experimental results demonstrate the effectiveness of the CCLA model, which shows superior performance over several state-of-the-art baseline methods.
Abstract: To overcome the problems caused by improper parameter selection when applying the Least mean square (LMS), Normalized LMS (NLMS), or Recursive least square (RLS) algorithms to estimate the coefficients of a second-order Volterra filter, a novel Davidon-Fletcher-Powell-based Second-order Volterra filter (DFP-SOVF) is proposed. Analyses of computational complexity and stability are presented. Simulation results on system parameter identification show that the DFP algorithm converges faster and is more robust than the LMS and RLS algorithms. Single-step predictions of the Lorenz chaotic time series with the DFP-SOVF model are stable and convergent, without divergence problems. For measured multiframe speech signals, the prediction accuracy of the DFP-SOVF model is better than that of Linear prediction (LP). The DFP-SOVF model can thus better predict chaotic time series and real measured speech signal series.
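The Davidon-Fletcher-Powell iteration at the heart of such a filter maintains an inverse-Hessian estimate built from gradient differences. The rank-two DFP update is shown below on a simple quadratic objective; this illustrates the DFP step itself under assumed exact line search, not the Volterra filter:

```python
import numpy as np

def dfp_minimize(A, b, x0, iters=20):
    """Minimize f(x) = 0.5 x'Ax - b'x with DFP: H approximates the inverse
    Hessian and is refreshed by the rank-two update
    H <- H + ss'/(s'y) - (Hy)(Hy)'/(y'Hy), with s = dx and y = dgrad."""
    x = x0.astype(float)
    H = np.eye(len(x))                      # initial inverse-Hessian estimate
    g = A @ x - b                           # gradient of the quadratic
    for _ in range(iters):
        if np.linalg.norm(g) < 1e-10:
            break                           # converged
        d = -H @ g                          # quasi-Newton search direction
        alpha = -(g @ d) / (d @ A @ d)      # exact line search for a quadratic
        s = alpha * d
        x = x + s
        g_new = A @ x - b
        y = g_new - g
        # DFP rank-two update of the inverse-Hessian estimate
        H = H + np.outer(s, s) / (s @ y) - (H @ np.outer(y, y) @ H) / (y @ H @ y)
        g = g_new
    return x
```

On a convex quadratic with exact line search, DFP reaches the exact minimizer A^{-1}b in at most n steps, which makes the iteration easy to verify.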
Abstract: Granular computing has been a very active research field in computer science in recent years. This paper introduces a new granular computing model based on algebraic structure, in which the granule structure is modeled by a binary operator and granulation is based on a congruence relation. Following the homomorphic consistency principle, methods for granulation (granularity coarsening) and granularity combination (granularity refinement) are introduced, and the corresponding numerical examples show that these methods are efficient and applicable. This work enriches granular computing models from a structural perspective and provides a theoretical basis for combining granular computing theory with algebraic theory.
Abstract: By combining Spatial modulation (SM) with Amplify-and-forward (AF) cooperative communication, a cooperative SM scheme with the AF (AF-SM) protocol is presented, and the corresponding Bit error rate (BER) performance is investigated. Based on the performance analysis, the error probability of antenna index estimation (Pa) and the error probability of symbol estimation (Pd), which together constitute the overall average BER, are derived. Tightly approximate closed-form expressions of Pa and Pd are attained, from which the closed-form overall average BER is obtained. The computational complexity of the AF-SM detector is also provided in terms of the numbers of complex additions and multiplications. Besides, the asymptotic BER at high SNR and the corresponding diversity gain are derived, yielding a diversity gain of Nr + 1 for a system with Nr receive antennas. Simulation results show that our theoretical analysis is valid and provides a good performance evaluation method for cooperative SM systems. Moreover, the BER performance is effectively improved as Nr increases, due to the higher diversity gain.
Abstract: We investigate a wireless-powered cooperative relay network, where an energy-constrained relay node accumulates the energy harvested from radio frequency signals and then assists source signal transmission. An adaptive cooperative transmission scheme for the relay network is developed: the relay helps transmit the source signal only if it has harvested enough energy and the channels between source and relay do not suffer an outage. Before each transmission block, one of two transmission modes, i.e., Half-duplex (HD) or Full-duplex (FD), is dynamically selected based on the maximal instantaneous capacity of the system. A closed-form expression for the exact outage probability of the system under the proposed scheme is derived. Monte Carlo simulations validate the accuracy of the mathematical analysis, and numerical results show that the proposed protocol outperforms the existing fixed cooperative transmission modes.
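The validation methodology of checking a closed-form outage expression against Monte Carlo simulation can be illustrated on a single Rayleigh-faded link, where the closed form is elementary. This is a stand-in for the paper's HD/FD relay expressions, which are not reproduced here:

```python
import numpy as np

def outage_sim(snr_mean, rate, n=200_000, seed=0):
    """Outage occurs when log2(1 + SNR) < rate. For Rayleigh fading the
    instantaneous SNR is exponentially distributed, and the closed form is
    P_out = 1 - exp(-(2^rate - 1) / snr_mean); we compare it with the
    empirical outage frequency from n channel draws."""
    rng = np.random.default_rng(seed)
    snr = rng.exponential(snr_mean, n)           # Rayleigh fading -> exp. SNR
    sim = float(np.mean(np.log2(1.0 + snr) < rate))
    theory = 1.0 - np.exp(-(2.0 ** rate - 1.0) / snr_mean)
    return sim, theory
```

With enough samples the simulated frequency lands within the Monte Carlo error of the analytic value, which is exactly the kind of agreement the abstract reports for its derived expression.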
Abstract: This paper analyzes existing network security situation evaluation methods and finds that they cannot accurately reflect the large-scale, synergetic, and multi-stage features gradually exhibited by network attack behaviors. To this end, the association between attack intention and network configuration information is analyzed in depth, and a network security situation evaluation method based on attack intention recognition is proposed. Unlike traditional methods, the evaluation is centered on the intruder. The method first performs causal analysis of attack events, then discovers and simplifies intrusion paths to recognize each attack phase, and evaluates the situation based on these phases. Finally, the attack intention is recognized and the next attack phase is forecast from the achieved attack phases, combined with vulnerability and network connectivity information. Simulation experiments for the proposed evaluation model are performed on network examples. The experimental results show that this method reflects the real attack more accurately, and since it needs no training on historical sequences, it is more effective for situation forecasting.
Abstract: As an emerging network technology, Software-defined network (SDN) has been developing rapidly in recent years due to its advantages in network management and updating. However, many open problems remain when applying this novel technology in practice, especially for meeting security demands. We investigate Address resolution protocol (ARP) spoofing, a representative network attack in traditional networks. We first implement ARP spoofing in an SDN network and find that the threat of ARP attacks still exists and has a large impact on the network. We then propose a novel defense mechanism against ARP spoofing for the OpenFlow platform. A theoretical analysis is given, and the mechanism is implemented as a module of the POX controller. Experimental results and performance evaluations show that our solution can remarkably reduce the security threat of ARP spoofing on OpenFlow and related SDN platforms.
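The core of a controller-side defense of this kind is to pin the first learned IP-to-MAC binding and flag ARP traffic that contradicts it. The following toy check sketches that idea; it is an illustrative assumption about the mechanism, not the actual POX module:

```python
def check_arp(binding, src_ip, src_mac):
    """Controller-side sanity check: learn the first observed IP->MAC
    binding, and treat any later ARP packet that claims a different MAC
    for a known IP as potential spoofing (return False)."""
    if src_ip not in binding:
        binding[src_ip] = src_mac    # new host: learn the binding
        return True
    return binding[src_ip] == src_mac
```

In an SDN setting the controller sees every ARP packet-in, so a failed check can be answered by installing a drop rule for the offending port rather than merely logging the event.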
Abstract: Cloud data confidentiality needs to be audited to address the data owner's concerns. Confidentiality auditing is usually based on logging schemes, but cloud data dynamics and sharing-group dynamics result in massive logs, which makes confidentiality auditing a formidable task for users with limited resources. We therefore propose a public auditing scheme for data confidentiality, in which the user resorts to a Third-party auditor (TPA) for auditing. Our scheme designs a special log called an attestation, in which a hashed user pseudonym is used to preserve user privacy. Attestation-based data access identification is presented in our scheme; it introduces no new vulnerabilities to data confidentiality and no extra online burden for the user. We further support holding the user responsible for data leakage accountable, based on the user pseudonym. Extensive security and performance analyses compare our scheme with existing auditing schemes, and the results indicate that the proposed scheme is provably secure and highly efficient.
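The hashed-pseudonym idea can be sketched as follows: the attestation stores only a salted digest of the identity, so the auditor can link a user's accesses (supporting accountability) without learning who the user is, while the salt holder can re-identify a responsible user after a leak. The function names and salt handling are illustrative assumptions, not the paper's exact construction:

```python
import hashlib

def pseudonym(user_id: str, audit_salt: bytes) -> str:
    # The log only ever sees this digest; the same user always maps to the
    # same pseudonym, so accesses remain linkable for auditing.
    return hashlib.sha256(audit_salt + user_id.encode()).hexdigest()

def attestation(user_id: str, data_id: str, op: str, salt: bytes) -> dict:
    # An attestation records which data was touched and how, under a
    # pseudonym rather than the real identity.
    return {"who": pseudonym(user_id, salt), "data": data_id, "op": op}
```

Two attestations by the same user carry the same pseudonym, a different user yields a different one, and the raw identity never appears in the log entry.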
Abstract: In social networks, most studies focus on trust prediction, while distrust has not received enough attention. The distinct characteristics of distrust relations present challenges to traditional relation prediction: distrust relations are very sparse in social networks, and negative interaction data are scarce. We investigate distrust prediction using only the network topology. After identifying seven distrust-inducing social factors, we adopt machine learning and optimization methods to model the prediction process. A framework for Distrust prediction in Signed social networks (DP-SSN) is proposed, which can predict distrust relations without any interaction data. Empirically, we perform extensive experiments on real-world data to corroborate the effectiveness of the proposed framework.
Abstract: This paper addresses the problem of target localization using Bistatic range (BR) measurements in a distributed multistatic passive radar system. The range-based positioning technique employs multiple transmitter-receiver pairs, which provide separate BR measurements. Based on the Maximum likelihood (ML) function, an efficient algebraic Approximate maximum likelihood (AML) algorithm for single-target localization is proposed. The closed-form AML solution has neither initial condition requirements nor convergence difficulties. Simulations compare its performance with the Cramer-Rao lower bound (CRLB) and the Two-step Weighted least squares (TS-WLS) algorithm. The proposed method is shown to achieve the CRLB accuracy under Gaussian measurement noise; it is more robust to noise than the TS-WLS method and is relatively insensitive to target-sensor geometry.
Abstract: To distinguish dielectric surface breakdown from atmospheric breakdown, a method for atmospheric breakdown experiments using a dielectric focusing lens is proposed. The focusing characteristics of the dielectric focusing lens are investigated for microwave frequencies from 1.3GHz to 9.3GHz, lens focal lengths from 0.3m to 0.9m, and dielectric constants from 0.3 to 0.9. Simulations show that a higher frequency, a shorter focal length, and a more suitable dielectric constant result in a better focusing effect: the rising and falling edges of the electric field strength change faster near the focus, which is closer to the theoretical value. A dielectric focusing lens for the S-band is optimally designed with polytetrafluoroethylene of 0.32m thickness, 0.6m diameter, and 0.4m focal length. The focus indicator of the dielectric lens is measured and the atmospheric breakdown experiment is carried out. The experimental values of atmospheric breakdown are consistent with the theoretical values. The electric field strength at the focus with the lens is 5-6 times greater than that without the focusing lens, and the image of atmospheric breakdown obtained by the dielectric lens focusing method is clearer than that obtained by the direct radiation method.
Abstract: A novel shaped-beam pattern synthesis method for generating an arbitrary footprint pattern with an arbitrary array structure is proposed. The embedded element pattern information is included in the pattern model, and a new cost function composed of three optimization terms, namely sidelobe levels, mainlobe ripples, and mainlobe gains, is constructed from the model. During the optimization, to further enhance the performance of the synthesized pattern, a weight matrix is designed and applied to the optimization terms, and the generalized Rayleigh quotient approximation method is then used to obtain the final array element excitations. To illustrate the good performance of the proposed method, several synthesis examples with different arrays and footprints are presented. Compared with other algorithms such as the Weighted alternating reverse projection (WARP) method, the proposed method achieves better results with lower sidelobe levels, smaller mainlobe ripples, and less computational load.
Abstract: High-precision navigation with decimeter- or even centimeter-level positioning accuracy is becoming more and more important in many fields. Accurate carrier phase observations are the prerequisite for this requirement, because they have much lower measurement noise than the code observations primarily used today. However, the integer ambiguity arising when counting cycles of carrier phases prohibits the straightforward application of these measurements. Once the integer ambiguity has been resolved, using carrier phase measurements is almost equivalent to using pseudorange, but with much higher precision; therefore, ambiguity resolution is the key to fast and high-precision positioning. The success rate of ambiguity resolution is affected by the ionospheric delay and observation noise, and in previous methods ionospheric errors are simply ignored. To resolve ambiguities reliably and quickly, we propose a modified method for precise point positioning. We analyze the characteristics of dual-frequency combinations of the original observations; combinations with longer wavelengths and lower noise are preferable, so the wide-lane and sub-wide-lane combinations, which have higher success rates, are chosen for ambiguity resolution. Extra pseudoranges are included to eliminate the ionospheric delay, which seriously hampers ambiguity resolution. After the combined ambiguities have been resolved, the original ambiguities on each frequency can be calculated from the linearly independent equations. Based on real Global positioning system (GPS) navigation data, the performance of the modified method is tested and compared with that of the traditional method. The results show that the modified method is less affected by ionospheric delay and obtains more accurate ambiguity resolution.
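The appeal of the wide-lane combination can be seen directly from its wavelength: for GPS L1/L2 it is c/(f1 - f2) ≈ 0.86 m, far longer than the roughly 0.19 m L1 wavelength, which is what makes the integer wide-lane ambiguity easier to fix. This is a standard textbook computation, not the paper's full combination scheme:

```python
C = 299_792_458.0                  # speed of light, m/s
F1, F2 = 1575.42e6, 1227.60e6      # GPS L1 and L2 carrier frequencies, Hz

def wavelength(f):
    return C / f

def wide_lane_wavelength(f1=F1, f2=F2):
    # The wide-lane phase (in cycles) is phi1 - phi2; its ambiguity N1 - N2
    # stays an integer but corresponds to this much longer wavelength, so a
    # given ranging error spans a smaller fraction of one cycle.
    return C / (f1 - f2)
```

With a ~0.86 m wavelength, decimeter-level pseudorange errors are still well below half a wide-lane cycle, which is why the combined ambiguity can be rounded reliably before recovering the original per-frequency ambiguities.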