Online First papers have been peer-reviewed and accepted for publication. Papers in this directory are posted online before technical editing and author proofing, so please use them with caution.
Abstract: Software trustworthiness is an important criterion for evaluating software quality. In component-based software, different components play different roles, and different users assign different trustworthiness grades after using the software; both factors affect the trustworthiness of the software. When software quality is evaluated comprehensively, it is therefore necessary to consider both component weights and user feedback. According to the different ways components are composed, different trustworthiness measurement models are established based on component weights and user feedback. Algorithms for these measurement models are designed to compute the corresponding trustworthiness values automatically. The feasibility of the models is demonstrated on a train ticket purchase system.
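The abstract combines two weighted averages, one over components and one over user feedback grades. A minimal sketch of that idea is below; the function name, the linear combination, and the balance parameter `alpha` are illustrative assumptions, not the paper's actual models.

```python
def system_trustworthiness(component_trust, component_weights,
                           user_grades, user_weights, alpha=0.5):
    """Combine component-level trust and user feedback into one score.

    alpha balances the two parts; the value 0.5 is an assumption for
    illustration, not a figure taken from the paper.
    """
    # Weighted average of per-component trustworthiness values
    comp = sum(t * w for t, w in zip(component_trust, component_weights)) \
        / sum(component_weights)
    # Weighted average of user feedback grades
    user = sum(g * w for g, w in zip(user_grades, user_weights)) \
        / sum(user_weights)
    # Linear combination of the two perspectives
    return alpha * comp + (1 - alpha) * user
```

For example, two components with trust values 0.9 and 0.8 and weights 2 and 1, combined with two user grades of 0.7 and 0.9, yield a score of about 0.83 under these assumptions.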
Abstract: Cross-project defect prediction is a hot topic in the field of defect prediction. The core problem is how to reduce the differences between projects so that the model achieves better accuracy. This paper approaches the problem from two perspectives: feature selection and distance-weighted instance transfer. We reduce the differences between projects through feature engineering and introduce transfer learning to construct a cross-project defect prediction model, WCM-WTrA, and a multi-source model, Multi-WCM-WTrA. Experiments on the AEEEM and ReLink datasets show that our method improves on the TCA+ algorithm by an average of 23% on the AEEEM datasets and by an average of 5% on the ReLink datasets.
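Distance-weighted instance transfer, as the abstract describes it, can be sketched as weighting each source-project instance by its similarity to the target project before training. The inverse-distance weighting below is an illustrative assumption, not the paper's WCM-WTrA algorithm.

```python
import numpy as np

def distance_weights(source_X, target_X):
    """Weight source instances by proximity to the target project.

    Illustrative sketch: each source instance gets a weight inversely
    proportional to its Euclidean distance to the nearest target
    instance, so source data that resembles the target counts more.
    """
    # Pairwise distances between every source row and every target row
    dists = np.linalg.norm(
        source_X[:, None, :] - target_X[None, :, :], axis=2
    ).min(axis=1)
    # Closer source instances receive weights nearer to 1
    return 1.0 / (1.0 + dists)
```

In practice the resulting weights would typically be passed to a classifier that supports per-instance weighting (e.g. a `sample_weight` argument) when fitting on the source project's labeled data.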
Abstract: Deep reinforcement learning (DRL), which combines deep learning with reinforcement learning, has recently achieved great success. In some cases, however, agents may reach worthless or dangerous states during learning, causing the task to fail. To address this problem, we propose an algorithm, referred to as the Environment Comprehension Mechanism (ECM), which enables deep reinforcement learning to make safer decisions. ECM perceives hidden dangerous situations by analyzing objects and comprehending the environment, so that the agent systematically bypasses inappropriate actions through constraints set up dynamically according to the state. ECM calculates the gradient of the states in the Markov tuple, sets up boundary conditions, and generates a rule that steers the agent away from unsafe states. ECM can be applied on top of basic deep reinforcement learning algorithms to guide the selection of actions. Experimental results show that the algorithm improves the safety and stability of the control tasks.
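The idea of constraining action selection to skip unsafe states can be sketched as a simple action-masking rule layered over a value-based policy. This is a generic safe-action filter, not the authors' ECM; `next_state_fn` and `is_unsafe` are hypothetical helpers standing in for ECM's state analysis and boundary conditions.

```python
def safe_greedy_action(q_values, state, actions, next_state_fn, is_unsafe):
    """Pick the highest-value action whose predicted next state is safe.

    Illustrative sketch of action masking: next_state_fn predicts the
    successor state, is_unsafe encodes the dynamically set constraints.
    """
    # Rank action indices by Q-value, best first
    ranked = sorted(range(len(actions)), key=lambda i: -q_values[i])
    for i in ranked:
        # Skip any action that would lead into an unsafe state
        if not is_unsafe(next_state_fn(state, actions[i])):
            return actions[i]
    # If every action appears unsafe, fall back to the greedy choice
    return actions[ranked[0]]
```

For example, in a one-dimensional task where positions below zero are unsafe, the filter rejects a higher-valued "move left" action at position 0 and returns "move right" instead.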