Abstract: This timely special issue aims to summarize recent developments in AI and its applications in robot systems so as to provide a reference for the future research directions of AI and its integration with robotics.
Abstract: This paper presents a brief review of recent work on machine intelligence for real-world applications of robots. To act in a real-world environment, a robot should possess a broad sense of intelligence, including speech, perception, reasoning, action, etc. In this paper, we particularly deal with the intelligence involving action or body motion. The intelligence related to robot action/motion can be classified into two categories: manipulation intelligence and mobility intelligence. Manipulation intelligence means the skill of reliably manipulating objects according to tasks, while mobility intelligence corresponds to the ability to move, fly, or jump autonomously in a natural environment. Human-robot interaction is another important topic for real-world applications. In addition to reviewing the major approaches, this paper also gives an overview of our efforts on these important topics.
Abstract: Given two data matrices X and Y, Sparse canonical correlation analysis (SCCA) seeks two sparse canonical vectors u and v that maximize the correlation between Xu and Yv. Classical and sparse Canonical correlation analysis (CCA) models consider the contribution of all samples of the data matrices and thus cannot identify an underlying specific subset of samples. We propose a novel Sparse weighted canonical correlation analysis (SWCCA), where weights are used to regularize different samples. We solve the L0-regularized SWCCA (L0-SWCCA) using an alternating iterative algorithm. We apply L0-SWCCA to synthetic and real-world data to demonstrate its effectiveness and superiority over related methods. We also consider SWCCA with different penalties, such as the Least absolute shrinkage and selection operator (LASSO) and Group LASSO, and extend it for integrating three or more data matrices.
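The alternating iteration behind L0-SWCCA can be sketched with truncated power-style updates. The following is a minimal numpy illustration, not the paper's exact algorithm: the truncation rule, the sign handling, and all variable names are our assumptions.

```python
import numpy as np

def truncate(a, k):
    """L0 step: keep the k largest-magnitude entries of a, zero the rest."""
    out = np.zeros_like(a)
    idx = np.argsort(np.abs(a))[-k:]
    out[idx] = a[idx]
    return out

def l0_swcca(X, Y, ku, kv, kw, n_iter=50, seed=0):
    """Alternately update sparse u, v and nonnegative sample weights w
    so as to increase the weighted correlation (Xu)^T diag(w) (Yv)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(Y.shape[1])
    v /= np.linalg.norm(v)
    w = np.ones(X.shape[0])
    for _ in range(n_iter):
        u = truncate(X.T @ (w * (Y @ v)), ku)
        u /= np.linalg.norm(u) + 1e-12
        v = truncate(Y.T @ (w * (X @ u)), kv)
        v /= np.linalg.norm(v) + 1e-12
        if (X @ u) @ (Y @ v) < 0:   # fix the sign so weights stay nonnegative
            v = -v
        w = np.maximum(truncate((X @ u) * (Y @ v), kw), 0)
        w /= np.linalg.norm(w) + 1e-12
    return u, v, w
```

Here `truncate` enforces the L0 constraint directly, and `w` down-weights samples whose elementwise contribution to the correlation is small, which is what lets the method focus on a specific subset of samples.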
Abstract: Programming control systems for mobile robots is complicated and time-consuming due to three aspects: robot behavior coordination, distributed multi-robot cooperation, and robot software reusability. The Subsumption model is a robust control architecture for mobile robots. The ALLIANCE model extends it to multi-robot systems and is fully distributed and fault-tolerant. The Robot operating system (ROS) provides many reusable robot modules. By combining the above three, we propose a software framework named ALLIANCE-ROS for developing fault-tolerant cooperative multi-robot systems with abundant software resources available. We encapsulate the ROS facilities to build the framework prototype and use a high-performance plugin-based mechanism to optimize the lower layers of the framework. The framework-provided API can be conveniently used to construct single-robot and multi-robot applications with all ROS resources available. This work is demonstrated by three application cases: an autonomous roving robot, a security patrol robot, and multiple patrol robots. They are constructed and tested in both simulated and real environments. The experimental results validate the usability and availability of ALLIANCE-ROS.
Abstract: We present a new method to solve the recognition and localisation problem of a surgical needle for robot-assisted laparoscopy. Based on the observation from a single monocular laparoscopic image, we propose a new modelling method that parametrises the full 3D pose of the surgical needle with constrained Degrees of freedom (DOFs) using only two generalised variables. To obtain effective image feedback for the modelling, a feature segmentation algorithm is introduced using probabilistic linear constraints in RGB colour space, constructed from typical laparoscopic images. An iterative algorithm using the gradient descent rule is implemented to make the computed needle pose converge to the real one. Experiments demonstrate the feasibility of the proposed scheme using laparoscopic torso models.
Abstract: We investigated two commonly used momentum algorithms, Classical momentum (CM) and Nesterov momentum (NM). We found that, when used in the Restricted Boltzmann machine (RBM), they have two main problems: first, their performance gains are not obvious and not as good as expected; second, they may lose their accelerating ability in the later stage of the training process. Aiming at these two problems, we proposed the Weight momentum algorithm and evaluated our approach on four datasets. It has been demonstrated that our method achieves better performance under both the reconstruction error and classification rate criteria.
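For reference, the two momentum variants under comparison can be written as one-line updates (a generic sketch with hypothetical learning-rate and momentum values, not the paper's RBM training code):

```python
def cm_step(w, v, grad, lr=0.1, mu=0.9):
    """Classical momentum: accumulate velocity from the gradient at w, then step."""
    v = mu * v - lr * grad(w)
    return w + v, v

def nm_step(w, v, grad, lr=0.1, mu=0.9):
    """Nesterov momentum: evaluate the gradient at the look-ahead point w + mu*v."""
    v = mu * v - lr * grad(w + mu * v)
    return w + v, v
```

The only difference is where the gradient is evaluated: NM looks ahead to `w + mu * v`, which is the source of its extra acceleration on well-behaved objectives.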
Abstract: This paper proposes an efficient and robust Loop closure detection (LCD) method based on Convolutional neural network (CNN) features. The primary method is called SeqCNNSLAM, in which both the outputs of the intermediate layer of a pre-trained CNN and the outputs of a traditional sequence-based matching procedure are incorporated, making it possible to handle viewpoint and condition variance properly. An acceleration algorithm for SeqCNNSLAM is developed to reduce the search range for the current image, resulting in a new LCD method called A-SeqCNNSLAM. To improve the applicability of A-SeqCNNSLAM to new environments, O-SeqCNNSLAM is proposed for online parameter adjustment in A-SeqCNNSLAM. In addition to the above work, we further put forward a promising idea to enhance SeqSLAM by integrating the advantages of both CNN features and VLAD, called patch-based SeqCNNSLAM (P-SeqCNNSLAM), and provide some preliminary experimental results to reveal its performance.
Abstract: Context-aware recommender systems, which aim to further improve accuracy and user satisfaction by fully utilizing contextual information, have recently become one of the hottest topics in the domain of recommender systems. However, not all contextual information is relevant or useful for recommendation purposes, and little work has been done on measuring how important the contextual information is for recommendation. We propose a heuristic optimization algorithm based on rough set theory and collaborative filtering to use contextual information more efficiently for boosting recommendation. Our approach involves three processes. First, significant attributes representing contextual information are extracted and measured using rough set theory to identify recommendable items. Second, the user similarity is evaluated with the target context taken into consideration. Third, collaborative filtering is applied to recommend appropriate items. We perform an empirical comparison of three approaches on two real-world data sets. The experimental results show that the proposed approach generates more accurate predictions.
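A minimal sketch of the second and third processes, context-restricted user-based collaborative filtering, is given below. The data layout and function names are our assumptions, and the rough-set attribute selection of the first process is omitted:

```python
import numpy as np

def predict_rating(ratings, user, item, ctx):
    """User-based CF where similarity is computed only on co-ratings in context ctx.
    ratings: dict mapping (user, item, context) -> rating."""
    def profile(u):
        return {i: r for (uu, i, c), r in ratings.items() if uu == u and c == ctx}
    base = profile(user)
    num = den = 0.0
    for other in {uu for (uu, _, _) in ratings if uu != user}:
        if (other, item, ctx) not in ratings:
            continue  # neighbor must have rated the item in this context
        prof = profile(other)
        common = sorted(set(base) & set(prof) - {item})
        if not common:
            continue
        x = np.array([base[i] for i in common], float)
        y = np.array([prof[i] for i in common], float)
        sim = float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y) + 1e-12))
        num += sim * ratings[(other, item, ctx)]
        den += abs(sim)
    return num / den if den else None
```

Restricting the profiles to the target context is what makes the similarity context-aware; a context-free CF would pool all ratings regardless of `ctx`.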
Abstract: On the basis of wavelet theory, a novel Adaptive wavelet thresholding method (AWT) is proposed for ECG signal enhancement. The best base wavelet for ECG signal filtering is automatically obtained through the cross-correlation coefficient and the energy-to-entropy ratio. The variable universal threshold (VarUniversal) is applied to different decomposition levels so as to suppress diverse noise. To achieve a smooth cut-off transition, an identical correlation shrinkage function (IcoShrinkage) is also adopted in the AWT according to its correlation coefficients with the hard thresholding and the soft thresholding. The performance of AWT is compared with four threshold approaches and six shrinkage functions, respectively, on the basis of 150 practical ECG signals from 30 subjects. The filtering results reveal that the AWT can adaptively choose an optimal base wavelet for a specific ECG signal. With the VarUniversal threshold and IcoShrinkage, the AWT obtains better filtering results than the other compared methods.
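Hard and soft thresholding, and a blend between them, can be illustrated in a few lines of numpy. This is a generic sketch: the actual IcoShrinkage interpolates according to correlation coefficients, whereas the `alpha` parameter here is a hypothetical stand-in for that weighting.

```python
import numpy as np

def hard(x, t):
    """Hard thresholding: zero coefficients below t, keep the rest unchanged."""
    return np.where(np.abs(x) > t, x, 0.0)

def soft(x, t):
    """Soft thresholding: shrink all surviving coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def blend_shrink(x, t, alpha):
    """Interpolate between hard (alpha=0) and soft (alpha=1) thresholding,
    giving a smoother cut-off transition than hard thresholding alone."""
    return (1.0 - alpha) * hard(x, t) + alpha * soft(x, t)
```

Applied level by level to the wavelet coefficients of a noisy ECG, the blended rule trades the bias of soft thresholding against the discontinuity of hard thresholding.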
Abstract: The recent uproar over fake news and misinformation on social media platforms has sparked interest in the scientific community in automatically detecting and refuting them. The most popular research task to counteract misinformation, rumour detection, requires repeated signals to reach adequate detection accuracy. Consequently, rumour detection recognizes rumours only after they have started spreading and causing harm. We introduce a new task called "rumour prediction" that assesses the possibility of a document arriving from a social media stream becoming a rumour in the future. Note that rumour prediction differentiates itself from rumour detection through instant decision making. This allows refuting misinformation before it spreads and causes harm. Our approach to rumour prediction harnesses content-based features in combination with novelty-based features and pseudo feedback. Our experiments show that we are able to accurately predict whether a document will become a rumour in the future. Additionally, we show how rumour prediction can significantly improve the accuracy of state-of-the-art rumour detection systems.
Abstract: Low light level (LLL) images, captured by an Intensified CCD (ICCD) camera equipped with an image intensifier, suffer from low spatial resolution and contrast due to noise and dispersion. By dividing the integration time into sufficiently short intervals, we obtain photon images in which photon-formed spots are nearly non-overlapping. To enhance LLL images, we propose a Multi-layer slicing (MLS) photon localization algorithm based on photon images. The photon image is sliced by different planes, and Photon spatial distribution (PSD) information is acquired using the projected area ratio, the circularity, and the number of slices. The enhanced LLL image is obtained by accumulating time-domain-correlated PSD images. Experimental results show that the visual effect, spatial frequency, and contrast of the enhanced image are significantly improved.
Abstract: This study focuses on the low-complexity synthesis of Exclusive-or sum-of-products expansions (ESOPs). A scalable cube-based method, which only uses iterative executions of cube intersection and subcover minimization on cube set expressions, is presented to obtain quasi-optimal ESOPs for completely specified multi-output functions. For deriving canonical Reed-Muller (RM) forms, four conversion rules for cubes are proposed to achieve fast conversion between a canonical form and an Exclusive-or sum-of-products (ESOP) or between different canonical forms. Numerical examples are given to verify the correctness of the cube-based minimization and conversion methods. The proposed methods have been implemented in the C language and tested on a large set of MCNC benchmark functions (ranging from 5 to 201 inputs). Experimental results show that, compared with existing methods, ours reduce the number of cubes by 27% and save 74% of the CPU time on average in the final minimized solution, and also consume less time during the conversion process. As a whole, our methods are efficient in terms of both memory space and CPU time and can deal with very large functions.
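The basic cube intersection operation, over the usual {0, 1, -} positional cube notation, can be sketched as follows (a Python illustration for clarity; the paper's implementation is in C):

```python
def cube_intersect(a, b):
    """Intersect two cubes written over {'0', '1', '-'}.
    Per variable: '-' is unconstrained, '0'/'1' must agree.
    Returns the intersection cube, or None if the cubes are disjoint."""
    out = []
    for x, y in zip(a, b):
        if x == '-':
            out.append(y)
        elif y == '-' or x == y:
            out.append(x)
        else:
            return None  # 0 meets 1: empty intersection
    return ''.join(out)
```

For example, `1-0` (x1=1, x3=0) intersected with `-10` (x2=1, x3=0) gives `110`, while `1--` and `0--` are disjoint. Iterating this operation over a cube set, together with subcover minimization, is the core loop of the described ESOP method.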
Abstract: A 14-bit pipelined Analog-to-digital converter (ADC) with single-side digital self-calibration in a 0.18μm CMOS process is presented. The single-side foreground digital self-calibration is introduced to reduce the nonlinearity caused by capacitor mismatches. The ADC has a front-end Sample-and-hold (SH) circuit, followed by thirteen 1.5-bit/stage sub-ADCs and a 2-bit flash ADC at the end. Test results show that, with a 140MHz input and a 200MHz sampling rate, the SINAD is improved from 59dB to 66dB and the SFDR is improved from 62dBc to 82dBc with the digital calibration. The measured SFDR reaches 77dBc even at 250MSps after calibration. The total power dissipation is 398mW at 250MSps, including the parallel Low voltage differential signaling (LVDS) output drivers.
Abstract: We accelerate a double-precision Alternating direction implicit (ADI) solver for the three-dimensional compressible Navier-Stokes equations from our in-house Computational fluid dynamics (CFD) software on the latest multi-core and many-core architectures (Intel Sandy Bridge CPUs, Intel Many integrated core (MIC) coprocessors, and NVIDIA Kepler K20c GPUs). Several performance optimization techniques are discussed in detail, and an in-depth analysis of the performance difference between Sandy Bridge and MIC is provided. Experimental results show that the proposed GPU-enabled ADI solver achieves a speedup of 5.5 on a Kepler GPU over two Sandy Bridge CPUs, and that our optimization techniques improve the performance of the ADI solver by 2.5-fold on two Sandy Bridge CPUs and 1.7-fold on an Intel MIC coprocessor. We also perform a cross-platform performance analysis (between GPU and MIC), which serves as a case study for developers selecting the right accelerators for their target applications.
Abstract: Increasing demand for better throughput and performance has motivated designers to come up with more sophisticated processors with innovative designs. Such multicore architectures offer a large amount of parallelism, which is often underutilized and thus becomes an overhead and a liability. Due to these advancements, there has been an exponential increase in the power consumption and heat dissipation of computing devices. Under these circumstances, an ideal system would be a reconfigurable one that can switch off all underutilized resources and work with only the required ones. There is a need for reconfigurable computing devices and processors that are smart enough to configure themselves dynamically at runtime to find a balance between throughput and power consumption. This paper proposes a novel fuzzy-logic-based Dynamic voltage and frequency scaling (DVFS) and power-gating enabled controller which is capable of reducing power consumption without affecting the throughput and overall performance of the system. The design is implemented on an Intel processor using Ubuntu as the operating system. Implementation results show that the proposed Fuzzy logic controller (FLC) reduces power consumption by up to 40% by reconfiguring the processor dynamically without compromising throughput.
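A toy version of such a fuzzy DVFS policy is sketched below: triangular membership functions fuzzify CPU utilization, three rules map the fuzzy sets to frequency targets, and centroid defuzzification picks the clock. The membership breakpoints, rule base, and frequency range are hypothetical, not the paper's controller.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def dvfs_freq(util, f_min=0.8, f_max=3.0):
    """Map CPU utilization (0..1) to a clock frequency (GHz) via a tiny rule base:
    low util -> f_min, medium util -> midpoint, high util -> f_max."""
    low  = tri(util, -0.5, 0.0, 0.5)
    mid  = tri(util,  0.0, 0.5, 1.0)
    high = tri(util,  0.5, 1.0, 1.5)
    # Centroid defuzzification over the three rule outputs.
    num = low * f_min + mid * (f_min + f_max) / 2.0 + high * f_max
    den = low + mid + high
    return num / den
```

The appeal of the fuzzy formulation is the smooth, monotone mapping from load to frequency, which avoids the oscillation that hard utilization thresholds can cause.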
Abstract: Data races are an increasingly prominent class of concurrency bugs and are difficult to reproduce and diagnose in parallel programs. The Linux kernel is a large-scale software system in which intensive thread-level parallelism and non-deterministic thread interleaving make race conditions more likely. This paper investigates real Linux kernel data races from the recent five years. Our results show that about 500 real kernel data races were reported and fixed in this period. Among all modules, file systems and drivers hold a much higher percentage of race conditions than the others. We also conduct a case-by-case study on data races and graphically show how they are triggered by specific thread interleavings. Our analysis results are of interest to researchers and engineers who are committed to kernel data race detection and kernel development.
Abstract: Owing to the deficiencies of mathematical modeling principles, data-driven neural network and support vector machine methods have become powerful basic methods for exchange rate prediction. Based on an analysis of the characteristics of exchange rate time series data, the exchange rate prediction performance of the Artificial neural network (ANN) and Least squares-Support vector machine (LS-SVM) is explored. A parameter optimization method based on two-stage training is proposed, and the fundamental principle of LS-SVM prediction is analyzed in detail. Using the daily, monthly, and quarterly data of three currency exchange rates, the prediction performance of LS-SVM is examined and compared with the ANN prediction results based on the same data in the relevant literature. According to the experimental results, LS-SVM has better short-term prediction performance and is superior to ANN in terms of prediction precision in most cases.
Abstract: The stress wave sensor detects and processes the electronic signals of friction, mechanical shock, and dynamic load on moving equipment parts. Stress wave analysis is accomplished using time-domain and frequency-domain feature extraction software, the Polynomial neural network (PNN), and data fusion technology. The equipment status is quantitatively analyzed, and equipment faults are accurately predicted. Compared with other currently adopted analysis technologies, the system can better monitor the operating condition of the equipment in real time and predict faults earlier. Production safety is thus guaranteed, equipment maintenance cost is reduced, and production efficiency is improved.
Abstract: An object recognition method is proposed in this paper by introducing the spatial location relationships of objects into a context model. The spatial-position information of the objects is first utilized to build the context model. The model parameters and the dependency structure of objects are learned by integrating the context information into the same probabilistic framework. Image recognition is accomplished by exploiting the efficient inference of the tree-structured model. The proposed method can greatly improve the object recognition rate and better keep the consistency of scenes. Its effectiveness is verified by testing on a real dataset and comparing with other existing algorithms.
Abstract: Until now, most Reversible data hiding (RDH) techniques have been evaluated by the Peak signal-to-noise ratio (PSNR), which is based on the Mean squared error (MSE). Unfortunately, MSE turns out to be an extremely poor measure when the purpose is to predict perceived signal fidelity or quality. The Structural similarity (SSIM) index has gained widespread popularity as an alternative motivating principle for the design of image quality measures. How to utilize the characteristics of SSIM to design RDH algorithms is therefore critical. We propose an optimal RDH algorithm under a structural similarity constraint. We derive the metric of the structural similarity constraint and further prove that it does not satisfy the Non-crossing-edges (NCE) property. We construct the rate-distortion function of the optimal structural similarity constraint, which is equivalent to minimizing the average distortion for a given embedding rate, and then obtain the optimal transition probability matrix under the structural similarity constraint. Experiments show that the proposed method can improve the performance of previous RDH schemes evaluated by SSIM.
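For reference, the SSIM index combines luminance, contrast, and structure comparisons; a minimal single-window numpy version is below. Practical SSIM is computed over local sliding windows and averaged, which this sketch omits.

```python
import numpy as np

def ssim(x, y, L=255.0):
    """Global SSIM index between two equal-size grayscale images with dynamic range L.
    Uses the standard stabilizing constants c1 = (0.01 L)^2, c2 = (0.03 L)^2."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Unlike MSE, this score depends on local means, variances, and the covariance between the images, which is why an RDH embedding rule optimized for SSIM differs from one optimized for PSNR.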
Abstract: Object detection plays an important role in the underwater object recognition technology of sonar equipment. In this paper, we propose a Novel quantum-inspired shuffled frog leaping algorithm (NQSFLA) to obtain more accurate detection results. The proposed NQSFLA adopts a fitness function combining intra-class difference with inter-class difference to evaluate frog positions more accurately, and a new quantum evolution update strategy to improve the searching ability during the search process. To avoid the disadvantages of the Quantum-inspired shuffled frog leaping algorithm (QSFLA), a fuzzy membership matrix with a spatial information model is developed, which can remove isolated regions and further improve the detection accuracy. A Segmentation, distribution and noise entropy (SDNE) model is also proposed to quantitatively evaluate the detection results. The detection results on original sonar images demonstrate the effectiveness and adaptability of the proposed method.
Abstract: An image encryption scheme based on confusion and diffusion is proposed. In the proposed scheme, original images are first transformed into other forms, such as binary streams and a ‘hyper-image’. The confusion process is then achieved by permutation at the pixel level, the binary bit level, and the ‘hyper-image’ level, respectively. Moreover, the diffusion process is operated at the ‘hyper-image’ level by a bitwise eXclusive or (XOR) calculation with a random sequence generated by a hyper-chaotic system. Furthermore, the central dogma of molecular biology is utilized in the construction of the hyper-image, which demonstrates its superiority. Both the theoretical analysis and the experimental results demonstrate the efficiency and validity of the proposed scheme: images can be encrypted sufficiently through the three-time permutation and the chaotic substitution.
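The diffusion step can be illustrated with a simple chaotic keystream. As a stand-in for the hyper-chaotic system of the scheme, this sketch uses the logistic map (the seed `x0` and parameter `r` are hypothetical key material); since XOR is its own inverse, the same routine decrypts.

```python
import numpy as np

def logistic_keystream(n, x0=0.3141, r=3.99):
    """Generate n pseudo-random bytes by iterating the logistic map x <- r*x*(1-x)."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_diffuse(data, x0=0.3141):
    """XOR a byte stream with the chaotic keystream; applying it twice restores data."""
    data = np.asarray(data, dtype=np.uint8)
    return np.bitwise_xor(data, logistic_keystream(len(data), x0))
```

A real hyper-chaotic generator has more state variables and a larger key space than the logistic map; the point here is only the structure of the diffusion pass.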
Abstract: As a variant of the Finite mixture model (FMM), the finite Inverted Dirichlet mixture model (IDMM) cannot avoid the conventional challenges, such as how to select the appropriate number of mixture components based on the observed data. To ease these issues, we propose a variational inference framework for learning the IDMM, which has been proved to be an efficient tool for modeling vectors with positive elements. Compared with the conventional Expectation maximization (EM) algorithm commonly used for learning FMMs, the proposed approach prevents over-fitting well. Furthermore, it can simultaneously determine the number of mixture components and estimate the parameters. Experimental results on both synthetic and real object detection data confirm that significant improvements in flexibility and efficiency are achieved.
Abstract: With increasing traffic every day, most cities in the world are facing serious traffic problems, such as traffic accidents, congestion, and air pollution. Despite the recent improvement of urban infrastructure, reasonable traffic light scheduling still plays an important role in alleviating these problems, and it is a great challenge to schedule a huge number of traffic lights efficiently. To solve this problem, we propose a Hybrid cellular swarm optimization method (HCSO) to optimize the scheduling of urban traffic lights. HCSO achieves efficient and flexible scheduling, covering both phase timing scheduling and phase shifting scheduling. To formulate effective solutions for various traffic problems and achieve globally dynamic scheduling, flexible and concise transition rules based on the Cellular automaton (CA) are defined, and the Dynamic cellular particle swarm optimization algorithm (DCPSO) is proposed to find the optimal phase timing schedule efficiently. Moreover, extensive experiments on real cases reveal that, compared with the differential search algorithm, the genetic algorithm, particle swarm optimization, comprehensive learning particle swarm optimization, and a random method, HCSO achieves obvious improvements under different traffic conditions.
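As background for DCPSO, a bare-bones particle swarm optimizer looks like the following. This is a generic numpy sketch with conventional inertia and acceleration coefficients; the cellular-automaton dynamics that distinguish DCPSO are not modeled here.

```python
import numpy as np

def pso(f, dim, n=20, iters=200, lo=-5.0, hi=5.0, seed=0):
    """Minimize f over the box [lo, hi]^dim with a plain particle swarm."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # particle velocities
    pbest = x.copy()                           # per-particle best positions
    pval = np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()            # global best position
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, float(pval.min())
```

In the traffic-light setting, a particle would encode a candidate phase timing schedule and `f` the congestion cost under the CA traffic model.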
Abstract: With the integration of smartphones into daily life, end users store a large amount of sensitive information on Android devices. To protect this sensitive information, a method of multi-booting the Android OS from an On-The-Go (OTG) device is proposed to meet the requirements of end users in different scenarios. The proposed method utilizes system domain isolation to guarantee the security of sensitive information across different Android OS instances. The difference from other solutions is that ours does not add additional components to the Android OS, which keeps the overhead of the Android runtime effectively controlled. A prototype of the proposed method is implemented and deployed on a real Android device to evaluate its effectiveness, efficiency, and performance overhead. The experimental results show that the performance overhead is reasonable and that our method can effectively mitigate the risk of sensitive information leakage when booting different Android instances on the same device.
Abstract: Recently, several reduced-operation Two-factor authentication (2FA) methods have been proposed to improve the usability of traditional 2FA. The existing works cannot protect the user's password from online guessing attacks and offline dictionary attacks, nor can they resist the identity fraud attack mounted by co-located attackers who have obtained the victim's password. To solve these problems, we provide a WiFi-based 2FA approach which enhances the security of reduced-operation 2FA without increasing the complexity of operation for the user. We analyze our approach's security in terms of resistance to identity fraud attacks, salt guessing attacks, and password guessing attacks. We also implement a prototype system and test its performance in various scenarios, e.g., a lab, a library, and a dormitory. The security analysis and experimental results show the effectiveness of our scheme for authentication.
Abstract: Signal-in-space (SIS) continuity is an important performance index of the Global navigation satellite system (GNSS). However, studies on the continuity of GNSS SIS are limited both at home and abroad, and are mainly based on the exponential distribution method. We first employ this method to analyze the SIS failures of GPS and BDS, finding that it is not flexible in describing the characteristics of GNSS SIS failures and that its fitting quality is not good. Therefore, we propose a method based on the Weibull distribution to evaluate the performance of GNSS SIS continuity. Our method is compared with the exponential, normal, and Gamma distribution models regarding the fitting of the interruption time interval of GPS SIS, the critical parameter of continuity. Results show that the fitting quality of the Weibull-distribution-based method is the best and that it can be used in various forms to describe reliability problems. The method is then used to evaluate the SIS continuity of BDS and GLONASS, and its effectiveness and rationality are validated again. The contributions of our study lie in the development of a practical method for evaluating GNSS SIS continuity and a reference for GNSS performance.
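Fitting a Weibull model to interruption intervals can be sketched via a profile-likelihood search over the shape parameter (a numpy illustration; the grid-search resolution and parameterization are our choices, not the paper's estimator):

```python
import numpy as np

def weibull_fit(t, k_grid=None):
    """Maximum-likelihood fit of Weibull shape k and scale lam to samples t > 0.
    For a fixed shape k, the MLE scale is lam = mean(t^k)^(1/k); the shape is
    found by maximizing the profile log-likelihood over a grid."""
    t = np.asarray(t, float)
    if k_grid is None:
        k_grid = np.linspace(0.1, 10.0, 991)
    n, slog = len(t), np.log(t).sum()
    best_params, best_ll = None, -np.inf
    for k in k_grid:
        lam = np.mean(t ** k) ** (1.0 / k)
        ll = n * np.log(k) - n * k * np.log(lam) + (k - 1) * slog - np.sum((t / lam) ** k)
        if ll > best_ll:
            best_params, best_ll = (k, lam), ll
    return best_params
```

A shape estimate near 1 would indicate that the exponential model (constant failure rate) suffices; shapes away from 1 are exactly the cases where the Weibull-based evaluation pays off.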
Abstract: A novel analytic approach is proposed for studying the effect of surface deformation on the electromagnetic performance of reflector antennas, covering both random machining errors and systematic errors caused by external loads (such as wind, gravity, and solar radiation) in the panels of the reflector surface. The deformations are modeled as error intervals, and their impact on the electromagnetic performance, including the radiation power pattern and the main electromagnetic characteristics (such as side lobe level, peak power, and half-power beamwidth), is efficiently estimated by Interval analysis (IA). Closed-form equations relate the error interval to the upper and lower bounds of the radiated power pattern interval by exploiting the rules of interval arithmetic. Two kinds of surface errors (random and systematic) and two shapes of deformation area (bump-like and sector) are considered in the numerical examples to assess the validity and effectiveness of the proposed approach. The results show that the proposed IA-based approach has great capability and effectiveness compared with traditional statistical methods (such as the Monte-Carlo method).
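The interval-arithmetic rules that drive the bound estimation can be sketched as a tiny class; propagating deformation intervals through the closed-form pattern equations then amounts to evaluating those equations with `Interval` operands. The class below covers only the basic operations, not the paper's pattern formulas.

```python
class Interval:
    """Closed interval [lo, hi] with the elementary interval-arithmetic rules."""
    def __init__(self, lo, hi):
        self.lo, self.hi = min(lo, hi), max(lo, hi)

    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a,b] - [c,d] = [a-d, b-c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product bounds come from the four endpoint products.
        p = [self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi]
        return Interval(min(p), max(p))
```

Because every operation returns guaranteed enclosing bounds, a single interval evaluation replaces the many random samples a Monte-Carlo bound estimate would need.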
Abstract: Among frequency estimation algorithms, spectrum zooming methods are superior in resolution and anti-noise performance, but they need more computational resources. Typical spectrum zooming methods include the Zoom fast Fourier transform (ZFFT), the Chirp-Z transform (CZT), and zero-padding, all of which are uniform spectrum zooming methods. A Nonuniform spectrum zooming transform (NSZT) method with higher accuracy, better anti-noise ability, and higher efficiency is presented. To verify the proposed method, Monte-Carlo simulations are performed, and the results are compared against the Cramer-Rao bound (CRB), showing that the proposed algorithm has the smallest Mean square error (MSE) among these algorithms. The NSZT method is applied in a 24GHz Frequency modulated continuous wave (FMCW) radar system, and a real-time ranging experiment is conducted. The experimental results show that the ranging error of the radar system is about 5mm at 10m, which verifies the feasibility of the proposed method.
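The idea of evaluating the spectrum only on a zoomed (and possibly nonuniform) frequency grid can be illustrated by direct DTFT evaluation. This is a generic sketch of spectrum zooming, not the NSZT algorithm itself; the grid size and signal parameters are hypothetical.

```python
import numpy as np

def zoom_spectrum(x, fs, freqs):
    """Evaluate the DTFT of x at an arbitrary (possibly nonuniform) frequency grid
    instead of the uniform FFT bins, concentrating resolution where it is needed."""
    n = np.arange(len(x))
    freqs = np.asarray(freqs, float)
    return np.exp(-2j * np.pi * np.outer(freqs, n) / fs) @ x

def estimate_freq(x, fs, f_lo, f_hi, m=2001):
    """Pick the peak of the zoomed magnitude spectrum inside [f_lo, f_hi]."""
    grid = np.linspace(f_lo, f_hi, m)
    return grid[np.argmax(np.abs(zoom_spectrum(x, fs, grid)))]
```

Restricting the grid to a narrow band gives far finer frequency spacing than the FFT bin width for the same signal length, which is the resolution advantage the abstract refers to.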
Abstract: Co-frequency interference is frequently encountered in aerostat passive bistatic radar exploiting frequency-modulated broadcasting signals. Independent component analysis (ICA) in the frequency domain, one of the classic Blind source separation (BSS) algorithms, is utilized to recover the direct signal in such scenarios. To solve the permutation problem of frequency-domain BSS, we use the correlation in the time-frequency domain between the output of ICA and a coarse reference signal obtained by pattern synthesis with nulls in the directions of the interference or reference stations. To suppress the interferences and clutters, the separated direct signals of the reference and interference broadcasting signals are fed into a type of two-dimensional adaptive filter operating in the spatial and fast-time domains. The processing scheme shows good performance in target detection, which is validated by computer simulation.